
CN118020136A - Improved guidance for electron microscopes - Google Patents

Info

Publication number
CN118020136A
CN118020136A (application CN202280063613.2A)
Authority
CN
China
Prior art keywords
image frame
view
field
microscope
detector
Prior art date
Legal status
Pending
Application number
CN202280063613.2A
Other languages
Chinese (zh)
Inventor
Peter Statham
Philippe Pinard
Current Assignee
Oxford Instruments Nanotechnology Tools Ltd
Original Assignee
Oxford Instruments Nanotechnology Tools Ltd
Priority date
Filing date
Publication date
Application filed by Oxford Instruments Nanotechnology Tools Ltd filed Critical Oxford Instruments Nanotechnology Tools Ltd
Priority claimed from PCT/GB2022/051946 external-priority patent/WO2023002226A1/en
Publication of CN118020136A publication Critical patent/CN118020136A/en

Landscapes

  • Analysing Materials By The Use Of Radiation (AREA)

Abstract


A method for analyzing a sample in a microscope is described. The method includes: acquiring a series of composite image frames using a first detector and a second detector different from the first detector, and displaying the series of composite image frames on a visual display in real time, wherein the visual display is updated to display each composite image frame in sequence. Acquiring the composite image frames includes: causing a charged particle beam to traverse a region of the sample, the region corresponding to a configured field of view of the microscope, wherein: when a mode parameter has a first value, traversal of the beam follows a first traversal path over the region and according to a first set of traversal conditions, and when the mode parameter has a second value, traversal of the beam follows a second traversal path over the region and according to a second set of traversal conditions, wherein a first total time required for the beam to traverse the entire first traversal path according to the first set of traversal conditions is less than a second total time required for the beam to traverse the entire second traversal path according to the second set of traversal conditions; monitoring a first set of resulting particles generated within the sample at a first plurality of locations within the region using the first detector so as to obtain a first image frame, the first image frame including a plurality of pixels corresponding to the first plurality of locations and having values derived from the monitored particles generated at the first plurality of locations; monitoring a second set of resulting particles generated within the sample at a second plurality of locations within the region using the second detector so as to obtain a second image frame, the second image frame including a plurality of pixels corresponding to the second plurality of locations and having a corresponding set of values derived from the monitored particles generated at the second plurality of locations; and combining the first image frame and the second image frame to produce a composite image frame, such that the composite image frame provides data derived from particles generated at the first plurality of locations and the second plurality of locations within the region and monitored by each of the first detector and the second detector.

Description

Improved guidance for electron microscopes
Technical Field
The present invention relates to a method for analyzing a sample in a microscope, and a system for analyzing a sample. In particular, the invention may provide the user with improved guidance (navigation) around the sample, assisting the user by combining information from multiple signals (even signals with poor signal-to-noise ratio) and by providing a display that allows the user to interact with the information sources so as to explore a large area efficiently and effectively.
Background
Fig. 2 shows a typical system for probing the surface of a sample in a Scanning Electron Microscope (SEM). An electron beam is generated inside the vacuum chamber and is typically focused using a combination of magnetic or electrostatic lenses. When the electron beam impinges on the sample, some electrons scatter back from the sample (backscattered electrons or BSE) or interact with the sample to produce Secondary Electrons (SE) and many other emissions, such as X-rays.
An electron detector, typically designed to respond to the intensity of SE or BSE from the sample, is connected to the signal processing electronics and generates a signal corresponding to the portion of the sample that is impinged by the focused beam. The X-ray photons emitted from this portion of the sample will also strike the X-ray detector and, with associated signal processing, the individual photon energies can be measured and signals generated corresponding to the characteristic emission lines of the chemical elements present under the beam. The focused electron beam is scanned over the surface of the sample using a beam deflector to traverse an area defining a field of view (FOV) of the sample surface, which area is to be displayed as a visual image. Such traversal is typically performed in a "raster" fashion, wherein the beam position is driven along a row in the X-direction of the cartesian coordinate system, and at the end of the row, the position is quickly driven ("retraced") to the beginning of the next row, which is a small increment further down the area in the Y-direction. Thus, the region is scanned line by line until the beam traverses the entire region to cover the FOV. As the beam scans along the lines within the FOV, the signal from the electron detector may be electronically filtered and sampled for a fixed period Te, or integrated for a fixed period Te, to give a result representative of the sample surface covered in each period. If the FOV is covered by a raster with Ny rows and each row within the FOV takes time L, the total number of recorded results will be Ny × L/Te. Each result constitutes the value of a pixel in the digital image, where for each full frame of data covering the FOV, the total number of pixels Npe = Ny × L/Te. When the digital image frame is sent to the visual display unit, the pixel values control the brightness, and the pixel locations on the display correspond to locations on the sample surface for each result.
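The pixel-count relation Npe = Ny × L/Te can be sketched numerically. The following helper is illustrative only (not from the patent); the example values (512 rows, 1 ms line time, 1 µs dwell) are hypothetical:

```python
def pixels_per_frame(n_rows: int, line_time: float, dwell_time: float) -> int:
    """Total pixels recorded in one frame: Npe = Ny * (L / Te).

    n_rows     -- Ny, number of raster rows in the field of view
    line_time  -- L, time spent scanning one row (seconds)
    dwell_time -- Te, sampling/integration period per pixel (seconds)
    """
    pixels_per_row = round(line_time / dwell_time)
    return n_rows * pixels_per_row

# Example: 512 rows, 1 ms per row, 1 us dwell -> 512 * 1000 pixels
print(pixels_per_frame(512, 1e-3, 1e-6))  # 512000
```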
Thus, if the visual display unit is much larger than the FOV on the sample, the displayed image will show a greatly enlarged area of the sample surface, and the "magnification" or "MAG" is formally the ratio of the width of the visual display screen to the width of the area scanned on the sample surface. The microscope or SEM makes visible a field of view of a region on the sample surface, which is controlled by the electron beam energy, the electron lens arrangement, the field used to deflect the focused electron beam, and the distance from the sample surface to the final lens. The visual display monitor will typically display the largest image possible, as well as other controls and information of the graphical user interface. The largest image of the field of view on the sample corresponds to the microscope configuration where the electron beam is scanned over the entire field of view. If the electron beam scans slowly, the signal-to-noise ratio of the electronic image is better, but the display update rate is slow. When the user wants to adjust the focus or astigmatism to get a better image, a fast image update with good S/N is required. Thus, many SEMs provide a "reduced raster" capability that maintains a slow scan rate but scans over a reduced area on the sample, and the results are shown on a correspondingly reduced area on the visual display. Thus, the magnification and S/N are preserved, but the display update rate is much faster. In this way, the "reduced raster" produces a modified smaller field of view, which is in effect a sub-portion at the center of the image of the configured field of view, that updates fast enough to allow interactive adjustment of focus and astigmatism.
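The speed-up from a "reduced raster" can be illustrated with a small sketch. The frame sizes and dwell time below are hypothetical; at a fixed dwell time, frame time scales with pixel count, so a quarter-width, quarter-height sub-area updates 16× faster at the same per-pixel S/N:

```python
def frame_time(n_pixels: int, dwell_time: float) -> float:
    """Time to complete one frame when every pixel is sampled for dwell_time."""
    return n_pixels * dwell_time

# Full 1024 x 1024 field versus a reduced raster over a 256 x 256 sub-area,
# both at the same 10 us dwell (same S/N per pixel).
full = frame_time(1024 * 1024, 10e-6)
reduced = frame_time(256 * 256, 10e-6)
print(round(full / reduced))  # 16
```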
In addition to conventional "raster" scanning, there are many other methods by which the focused beam can traverse the FOV at the desired spatial resolution of the image. One common approach is to use an "interlaced raster" scan, where, in order to collect data for Ny rows, two passes through the region are required: in the first pass the beam is directed along a row, then the row below it is skipped, and this process is repeated until Ny/2 "even-numbered" rows have been covered; in the second pass the beam is directed along all the rows that were skipped, to cover the remaining Ny/2 rows required for the entire traversal of the FOV.
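The two-pass row ordering described above can be sketched as follows (an illustrative, 0-indexed helper, not the patent's implementation):

```python
def interlaced_row_order(n_rows: int) -> list[int]:
    """Row visit order for a two-pass interlaced raster:
    first the even-numbered rows, then the skipped odd-numbered rows."""
    evens = list(range(0, n_rows, 2))
    odds = list(range(1, n_rows, 2))
    return evens + odds

print(interlaced_row_order(6))  # [0, 2, 4, 1, 3, 5]
```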
In another example, the beam may be driven along a "serpentine" path as shown in FIG. 1. With this type of traversal, if an electronic signal measurement is made within a period Te, the beam path during that period may be arranged to traverse a small rectangular region that will correspond to a pixel in the acquired digital image frame. If the pixel value obtained in this period is used to control the brightness of an equivalent rectangular area on the visual display unit, the brightness will be more representative of that area than a conventional raster in which there is a gap between successive scan lines in the Y direction.
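A serpentine traversal can be sketched as a coordinate generator. This is a minimal illustration under assumed grid conventions (x along rows, y down the field), not the patent's implementation:

```python
def serpentine_path(n_cols: int, n_rows: int) -> list[tuple[int, int]]:
    """(x, y) visit order for a serpentine (boustrophedon) traversal:
    left-to-right on even rows, right-to-left on odd rows, so the beam
    never retraces across the full width of the field of view."""
    path = []
    for y in range(n_rows):
        xs = range(n_cols) if y % 2 == 0 else range(n_cols - 1, -1, -1)
        path.extend((x, y) for x in xs)
    return path

print(serpentine_path(3, 2))  # [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```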
When the electron beam traverses the FOV and the electron signal measurements are recorded, a histogram of X-ray photon energy measurements, equivalent to the X-ray energy spectrum, may be obtained over a period Tx to obtain a set of values representing the area on the sample surface traversed during the period Tx. By repeating this acquisition at fixed intervals Tx while the beam traverses the FOV, a set of pixel values of a single frame of "spectral image" can be obtained, where each pixel has a set of values representing the X-ray spectrum emitted from the region traversed during period Tx. If Tx = Te, then the number of pixels in the X-ray spectral image Npx = Npe. However, if Tx > Te, then Npx < Npe, and each pixel in the X-ray spectral image will represent a larger area on the sample surface than a pixel in the digital electronic image.
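The relation between the electron-image and X-ray spectral-image pixel counts, Npx = Npe × Te/Tx, can be sketched numerically (illustrative values only):

```python
def spectral_pixels(npe: int, te: float, tx: float) -> int:
    """Pixels in one X-ray spectral-image frame, Npx = Npe * Te / Tx,
    assuming Tx >= Te and the same traversal of the FOV."""
    return round(npe * te / tx)

# 512000 electron pixels at Te = 1 us; each spectrum accumulates for Tx = 10 us,
# so each X-ray pixel spans 10 electron pixels.
print(spectral_pixels(512000, 1e-6, 1e-5))  # 51200
```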
In an alternative scanning strategy, the focused electron beam is held at one position in a rectangular grid covering Npe points of the FOV, and the electronic signal is measured for a period Te, and the result is stored in a corresponding pixel of the digital image. When this process is repeated for each point in the rectangular grid, a complete "frame" of electronic image data containing Npe pixels has been acquired. If the pixel values are used to control the brightness of a rectangular area on the visual display unit corresponding to an equivalent rectangular area centered on the beam position on the sample surface, the displayed image will be a magnified image of the FOV on the sample surface. If the incident electron beam is slightly defocused such that the beam spot covers the area between grid points, the value of each pixel in the digital electronic image will represent the average signal value over the area near each beam location, so that in a single frame, signals will be obtained from the entire area of the FOV on the sample surface, rather than from a grid of discrete locations. The X-ray spectrum can also be acquired during a time Tx that may be greater or less than Te when the beam is at a point. Since the beam is located at all Npe grid locations, a single frame of the X-ray spectral image can be acquired with Npe pixels, each having a set of values corresponding to histograms of photon energies obtained at a corresponding location on the sample surface or a small region near that location. Alternatively, if the beam is positioned at grid points in sequence along the serpentine path, the X-ray spectrum may continue to be acquired for a period Tx while the beam is positioned at a series of grid positions along the path. 
If an electronic signal measurement is made at each point while an X-ray spectrum is acquired for a series of points, a single pixel in the X-ray spectrum image may correspond to a rectangular area on the sample covering many grid locations, while each pixel in the digital electronic image corresponds to a grid point on the sample surface.
In another strategy of acquiring signals, the beam is positioned at a series of points on a grid covering the FOV, and both the electronic signal measurements and the X-ray spectrum are recorded at each point on the grid. The X-ray spectra from the set of points covering the small rectangular areas on the sample are then summed to give a single spectrum for each small rectangular area. Thus, the obtained X-ray spectral image will contain fewer pixels than the digital electronic image, wherein each pixel in the X-ray spectral image corresponds to a larger area on the sample surface than a pixel in the digital electronic image.
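The summing of per-point spectra into a coarser X-ray spectral image can be sketched with NumPy. The shapes and the binning factor `block` are hypothetical, and the patent does not prescribe this particular data layout:

```python
import numpy as np

def bin_spectral_image(spec_img: np.ndarray, block: int) -> np.ndarray:
    """Sum X-ray spectra over block x block pixel neighbourhoods so that each
    coarse pixel carries one summed spectrum.
    spec_img has shape (ny, nx, n_channels); ny and nx divisible by block."""
    ny, nx, nch = spec_img.shape
    return spec_img.reshape(ny // block, block, nx // block, block, nch).sum(axis=(1, 3))

# A 4 x 4 frame of 8-channel spectra binned 2 x 2 gives a 2 x 2 frame,
# each coarse pixel summing 4 spectra.
coarse = bin_spectral_image(np.ones((4, 4, 8)), 2)
print(coarse.shape, coarse[0, 0, 0])  # (2, 2, 8) 4.0
```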
In another variant of interlacing, the beam is also positioned at a series of Npe points on a rectangular grid covering the FOV area of the sample. However, the order of the points at which the beam is positioned is arranged such that the beam is first positioned on Npe/4 grid points covering the entire FOV area, then on a different set of Npe/4 grid points covering the entire FOV area, and the process is repeated 4 times until the beam has been positioned on all Npe grid points to complete the full traversal of the area. This can be thought of as the beam making 4 passes over the entire FOV, each pass visiting the locations of one of 4 coarser sub-grids with twice the point spacing, but the total time to complete a full-resolution traversal of the FOV remains the same as if the beam were positioned on each grid point in a single pass through the region. This variant is sometimes referred to as 2 × 2 interlacing.
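A 2 × 2 interlacing order can be sketched as follows; the sub-grid offsets and their ordering below are an illustrative choice, not taken from the patent:

```python
def interlace_2x2_order(n: int) -> list[tuple[int, int]]:
    """Visit order for 2 x 2 interlacing over an n x n grid: four passes,
    each covering one of four coarser sub-grids with twice the point
    spacing, together visiting every grid point exactly once."""
    order = []
    for oy, ox in [(0, 0), (0, 1), (1, 0), (1, 1)]:  # hypothetical pass order
        for y in range(oy, n, 2):
            for x in range(ox, n, 2):
                order.append((x, y))
    return order

pts = interlace_2x2_order(4)
print(len(pts), len(set(pts)))  # 16 16 -- every grid point visited exactly once
```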
The above examples are not exhaustive, but are intended to show that when an electron beam traverses an area of the sample surface corresponding to a field of view, a single frame of a digital electron image containing Ne pixels and a single frame of an X-ray spectrum image containing Nx pixels, where Nx is typically less than or equal to Ne, can be obtained from the same field of view.
Typically, the "field of view" on an SEM is up to 1cm in size, but this can be much larger or smaller, and if the digital image is displayed on a fixed-size monitor, the size of the field of view effectively determines the magnification such that a smaller field of view represents a higher magnification. The size of the sample to be inspected is typically much larger than the maximum field of view achievable by deflection of the electron beam, and in order to probe the entire sample surface, a controller is typically required to move the support or stage supporting the sample, and this can typically move the scan field of view by a few centimeters. A similar system is used in electron microscopy, where the sample is thin enough to allow the beam to be transmitted through the sample (scanning transmission electron microscope or STEM). In this case, the range of beam deflection and stage movement is typically less than the range of the SEM.
When the electron beam impinges on the sample, the number of electrons emitted from the sample is typically several orders of magnitude higher than the number of X-ray photons generated. Thus, any X-ray image derived from the acquired X-ray data typically has a much worse signal-to-noise ratio (S/N) than the electronic image, and it is desirable to improve the X-ray image using the best available method. The amount of X-rays collected by the detector is determined by the solid angle that the X-ray detector subtends at the point where the electron beam impinges on the sample. For an arrangement where the X-ray detector is located to the side of the electron beam, as shown in Fig. 2, the collection solid angle is maximized by using a large-area detector or by placing the detector very close to the sample. In a different arrangement, the X-ray detector uses a plurality of sensors arranged around the incident electron beam to maximize the total collection solid angle. In this "on-axis" arrangement, the X-ray detector is located between the final lens aperture and the sample, and the electron beam travels through the gap between the sensors.
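The collection solid angle can be estimated with the small-detector approximation Ω ≈ A/d². The detector area and working distance below are hypothetical illustration values; for large or very close detectors the exact geometry must be used instead:

```python
def collection_solid_angle(detector_area_mm2: float, distance_mm: float) -> float:
    """Approximate solid angle (steradians) subtended by a small X-ray
    detector at the beam impact point: omega ~= A / d^2 (valid only when
    the detector is small compared with its distance from the sample)."""
    return detector_area_mm2 / distance_mm ** 2

# 30 mm^2 sensor at 15 mm from the sample (illustrative numbers)
print(round(collection_solid_angle(30.0, 15.0), 3))  # 0.133
```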
Even when the collection solid angle is maximized, the signal-to-noise ratio of the derived X-ray image for a single frame is typically worse than for an electronic image, and this makes it difficult for a user to see details in a single image frame when the dwell time of each pixel is short. If the dwell time is extended to improve the signal-to-noise ratio, the time to complete the image frame increases and the user must wait longer to see an image covering the entire field of view. An important innovation in X-ray imaging is the technique of recording the scan position and energy of each individual photon so that the stored data can be processed to produce an X-ray image from any desired characteristic chemical element emission (Mott and Friel, 1999, Journal of Microscopy, Vol. 193, Pt 1, January 1999, pp. 2-14). Mott and Friel use a small dwell time and repetitively scan the same field of view while continuously accumulating data, rather than using a large dwell time per pixel for a single scan. Their system is programmed to repeatedly prepare the X-ray image for display using the accumulated spectral data at each pixel, such that the derived X-ray elemental map appears progressively less grainy as the S/N improves with each new data frame added. This method of acquiring X-ray data, displaying derived X-ray elemental maps, and observing the resulting images improve over time has now been in common use for nearly twenty years.
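The S/N improvement from accumulating frames follows Poisson counting statistics: if each fast frame contributes on average r X-ray counts per pixel, the accumulated map's expected S/N is sqrt(N·r) after N frames. A minimal sketch with an illustrative rate (not a value from the patent):

```python
import math

def map_snr(rate_per_frame: float, n_frames: int) -> float:
    """Expected S/N of an accumulated Poisson-counting map:
    mean / sigma = (N * r) / sqrt(N * r) = sqrt(N * r)."""
    return math.sqrt(n_frames * rate_per_frame)

# Accumulating 64 frames instead of 1 improves S/N by sqrt(64) = 8x.
print(round(map_snr(0.2, 64) / map_snr(0.2, 1), 6))  # 8.0
```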
When users need to explore the sample to find a region of interest, they typically use SEM displays that have been optimized to interact quickly with the electronic image. SEM typically displays high signal-to-noise electronic images that are refreshed per frame and use a fast frame rate such that if the focal length or magnification changes or the field of view shifts (e.g., by moving a support or stage that supports the sample or adding an offset to the scan deflection), the user sees the new image at a fast enough rate to interact effectively. Even though the frame rate may be below 50Hz, the update rate that is high enough to track the moving features is often referred to as a "TV rate" similar to a home television. After setting the magnification of the electronic image such that the field of view covered by the electron beam scan is suitable for displaying the type of feature of interest on the sample surface, the user will move the stage while viewing the electronic image to find a region that may contain a chemical element or compound of interest. When a possible region appears in the field of view, the user will stop the stage movement, then adjust the scan rate and start the X-ray acquisition, and observe the element map as the S/N improves frame by frame, as described by Mott and Friel. If the distribution of elements or compounds in the field of view is found to be unsuitable soon, the user will return to an interactive investigation using fast frame rate electronic images and move the stage to find a more suitable area to acquire X-ray data. 
This cycle of probing with the electronic image, periodically stopping to acquire enough X-ray data to check whether the field of view has the desired distribution of elements, and returning to the electronic image when it does not, is inefficient, and the user may also miss areas on the sample containing the material of interest while trying to navigate over a larger area of the sample.
When the user's task is to find areas containing specific chemical elements, compounds, or materials with certain properties, the problem is that the electronic image does not provide enough information. The SE signal shows the surface topography well, and the BSE signal may indicate the average atomic number of the material, but neither signal provides specific information about the chemical element content or material properties, so the user must guess whether a region may be worth acquiring additional data from to provide such information. The derived X-ray images may provide information about the chemical element content, but they have poor S/N and do not provide any topographical details or sufficiently high-resolution images to help the user know their location on the sample. Thus, there is a need for an efficient method for visualizing material properties such as chemical element content while guiding a user over a large area on a sample to find a material of interest.
WO2019/016559A1 discloses an analysis method which involves displaying, in combination and in real time as the images are acquired, microscope images of a sample acquired using two different types of detectors, the images thus having different acquisition properties and representing different information about the sample. This technique improves the speed and efficiency with which a user of the microscope device can navigate through different regions of a sample and locate features of interest on it. This benefit arises not only because simultaneously displaying both types of image of the same region of the sample in the same field of view allows the user to quickly identify features of potential interest from the first type of image, which may, for example, show the physical shape or topography of the sample surface while guiding around the sample, but also because, after locating these potential features, the field of view of the microscope can be maintained so that it continues to contain them while the second type of image data is acquired or accumulated, yielding a different type of information about that region of the sample than the first type of image provides.
However, there remains a need for an improved analysis method whereby the speed and efficiency with which material properties can be visualized when guided over a large sample area, as well as the quality of visual data available for the area of interest, are further improved.
Disclosure of Invention
According to a first aspect of the present invention there is provided a method for analysing a sample in a microscope, the method comprising:
acquiring a series of composite image frames using a first detector and a second detector different from the first detector, wherein acquiring the composite image frames comprises:
a) Causing a charged particle beam to traverse a region of a sample, the region corresponding to a configured field of view of the microscope, wherein:
when the mode parameter has a first value, the traversal of the beam follows a first traversal path over the region and according to a first set of traversal conditions, and
when the mode parameter has a second value, the traversal of the beam follows a second traversal path over the region and according to a second set of traversal conditions,
wherein a first total time required for the beam to traverse the entire first traversal path according to the first set of traversal conditions is less than a second total time required for the beam to traverse the entire second traversal path according to the second set of traversal conditions;
b) Monitoring a first set of resulting particles generated within the sample at a first plurality of locations within the region using the first detector to obtain a first image frame comprising a plurality of pixels corresponding to the first plurality of locations and having values derived from the monitored particles generated at the first plurality of locations,
c) Monitoring, using the second detector, a second set of resulting particles generated within the sample at a second plurality of locations within the region to obtain a second image frame comprising a plurality of pixels corresponding to the second plurality of locations and having a respective set of values derived from the monitored particles generated at the second plurality of locations, and
d) Combining the first image frame and the second image frame to produce the composite image frame such that the composite image frame provides data derived from particles generated at the first plurality of locations and the second plurality of locations within the region and monitored by each of the first detector and the second detector;
and displaying the series of composite image frames on a visual display in real time, wherein the visual display is updated to display each composite image frame in turn.
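One possible realization of the combining step is an alpha blend of a grayscale electron-image frame with a colour-coded X-ray map of the same field of view. The patent does not prescribe any particular combination, so the function below is only a sketch under assumed array conventions:

```python
import numpy as np

def combine_frames(electron: np.ndarray, xray: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a grayscale electron frame with a colour-coded X-ray element map.
    electron: shape (h, w), values in [0, 1]; xray: shape (h, w, 3), values in [0, 1].
    Returns an (h, w, 3) composite frame."""
    gray = np.repeat(electron[..., None], 3, axis=2)  # grayscale -> RGB
    return (1 - alpha) * gray + alpha * xray

# Uniform mid-gray electron frame blended with an empty (black) X-ray map
composite = combine_frames(np.full((4, 4), 0.5), np.zeros((4, 4, 3)))
print(composite.shape, composite[0, 0, 0])  # (4, 4, 3) 0.25
```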
This method provides further advantages over existing electron microscopy analysis techniques when navigating around, and collecting data from, a sample. The inventors have devised a method that provides additional benefits in terms of signal-to-noise ratio and efficient and rapid sample guidance. This is achieved by changing the mode in which the beam scans the sample when acquiring data for the image frames, based on whether the field of view of the microscope is changing or stationary. In particular, the method provides an advantageous switch between a faster mode of image frame acquisition when the field of view is changing and a slower mode when the field of view is unchanged. The change in the time required to acquire the data of a composite image frame, or in the time required to acquire data from the entire traversal path or for the entire frame in a given mode, or in the acquisition speed, may be implemented in a variety of ways. These include changing the resolution of the acquired image, changing the average time taken to acquire data from a location on the sample surface, and changing the extent to which the traversal path covers the area on the sample. Implementing this switching approach provides significant advantages during sample analysis, where an operator will typically navigate the sample in the microscope by moving the field of view relative to the sample until a potential feature or region of interest is identified, and then stop the movement while collecting further data from that region or feature. By adjusting, in response to how the sample is being navigated, the way the beam traverses the sample surface and the way the resulting particles are monitored, the speed and efficiency with which material properties can be visualized while guiding over a large sample area, as well as the quality of the visual data available for the area of interest, are improved over prior methods.
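The fast/slow switch can be sketched as a trivial mode selector. The mode names and the per-mode traversal conditions below are hypothetical illustration values, not values from the patent:

```python
FAST, SLOW = "fast", "slow"

def select_mode(fov_changing: bool) -> str:
    """Mode-parameter selection sketch: a fast, coarse traversal while the
    field of view is moving, and a slower, higher-quality traversal once
    it is stationary."""
    return FAST if fov_changing else SLOW

# Hypothetical per-mode traversal conditions: (pixels per side, dwell time in us)
CONDITIONS = {FAST: (256, 0.5), SLOW: (1024, 10.0)}

mode = select_mode(fov_changing=True)
print(mode, CONDITIONS[mode])  # fast (256, 0.5)
```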
The method facilitates real-time tracking of the sample under the microscope by displaying the combined image in real time. The sequential and rapid presentation of a series of acquired composite image frames provides the operator with a "live" view of the sample being analyzed by the two detectors. In the context of the present disclosure, a series may be understood as a plurality of composite image frames that occur one after the other. A series may be considered to have a sequential order. Typically, this is the order in which the composite image frames are acquired, or corresponds to the order in which the composite image frames are acquired, and/or the order in which their respective component frames (i.e., the first image frame and the second image frame) are acquired. Typically, the order of the series of composite image frames is the same as the order in which they are displayed.
The term "series" does not exclude the set of composite image frames being a subset of a larger set or series of frames. A series also does not necessarily exclude the possibility of overlapping with another set or series of acquired frames, in time and/or with respect to the composite image frames in each set or series. As discussed later in this disclosure, for example, a series may be interrupted by additional frames that are not considered part of the series. For example, intermediate composite image frames may be acquired in the same or different ways, and such intermediate composite image frames will not be considered part of the series. Preferably, however, the series is an uninterrupted series.
In the present disclosure, the feature that acquiring a composite image frame includes the steps described above may be understood to mean that acquiring each composite image frame in a series includes these steps in an exemplary embodiment.
The mode parameter may be referred to as a scan mode parameter. This naming is appropriate because the mode parameter values affect the manner in which the traversal of the region by the beam is performed, which may also be referred to as beam scanning.
In general, in the present disclosure, a parameter having a value may be understood as a parameter having a value equal to the specific value, which is typically a predetermined value.
It should be appreciated that the first value and the second value are typically different. Typically, the first value of the mode parameter corresponds to a first scan mode, which is typically a "fast" scan mode.
Typically, the mode parameter may have a second value corresponding to a second scan mode different from the first scan mode, and this is typically a "slow" scan mode.
One way in which particular mode parameter values may affect the scan mode is by changing the traversal path followed by the beam when acquiring the composite image frame. Preferably, one or both of the first traversal path and the second traversal path are predetermined, at least at or before the start of the traversal for a given frame. However, if the first traversal path is different from the second traversal path, it may be the case that, during the traversal of the region for a given frame, either or both of these first and second paths are themselves changed by one or more switches of the mode parameter value (i.e., changes to and/or from the first and second values).
For example, it will be appreciated that switching between two preconfigured or predetermined paths may make redirecting the beam time-consuming or inefficient. In this case, some deviation from, omission from, and/or change to either or both of the preconfigured or predetermined paths may be allowed or implemented in order to define the actual first and/or second path traversed.
The first and second pluralities of locations within the region generally coincide with or lie along the first and second traversal paths, respectively. One or more (or in some cases all) of the locations included by either or both of the first and second pluralities of locations may coincide with both the first and second traversal paths of the given composite image frame.
In some embodiments, the values of the mode parameters are configured according to whether the configured field of view is changed or unchanged. Preferably, the mode parameter is configured to have a first value or a second value, respectively, depending on whether the configured field of view is changed or unchanged.
It may be particularly advantageous to initiate or otherwise cause the fast scan mode to occur with the configured field of view in a changed state. Thus, in some preferred embodiments, the mode parameter is configured to have a first value in response to a configured change in the field of view of the microscope. A parameter configured to have a first value may be understood as a parameter set to the first value, irrespective of the value of the parameter prior to the setting and whether the parameter actually has a value. In such embodiments, the fast scan mode may be automatically entered by changing the configured field of view.
Typically, the mode parameter has a first value when the configured microscope field of view changes, i.e. when it has a changed state.
The response is preferably immediate. In practice, however, some delay in the responsive setting of the parameter value may be necessary or desirable, and thus the response may occur after the configured field of view is placed in the changed state, for example by a possibly predetermined time or number of frames. Preferably, the mode parameter is maintained at the first value as long as, or at least as long as, the configured field of view remains changed, or until it ceases to change. For example, in embodiments in which the parameter is automatically set to the second value by the configured field of view becoming static or being static, switching of the parameter both to the first value and to the second value may be performed automatically in accordance with the configured field of view.
The mode parameter may be set to a first value in response to the configured field of view being in a changed state, e.g., for a predetermined time, or for a predetermined number of frames in the series.
Preferably, the parameter may be configured to have the first value if the configured microscope field of view varies by more than a certain threshold, for example depending on the degree of similarity of the configured field of view to that of the previous frame; measures of similarity may include, for example, the degree of overlap between the areas of those configured fields of view, and/or the relative or absolute configured positions on the sample, and/or the zoom level.
The field of view of the microscope being configured to be "changed" may be understood to mean that it is in a changed state, or in other words that it is being configured to change in some way. For example, a change in configuration may include translating across the sample and/or zooming in or out, so as to correspond to a different, smaller or larger region on the sample, respectively. It will also be appreciated that the setting of the parameter to the first value may be responsive to the configured field of view being placed in a changed state, being in a changed state, or both. For example, the first value may be set according to or in response to a single movement of the configured field of view.
As described above, in some embodiments, the mode parameter is configured to have the second value in response to the configured microscope field of view being unchanged. This response also does not have to be immediate, but can be delayed as described above for setting the first value. The configured microscope field of view may be understood as the field of view the microscope is configured to have, whether static or changing, at a given time. Indeed, as discussed in more detail later in this disclosure, some variation in the actual microscope field of view may occur regardless of whether the configured field of view is changing. Thus, the actual field of view and the configured field of view may differ, for example, due to field of view offset.
The response may be a response to any change in the field of view of the configuration as described above being stopped. The second value may be set in response to the configured field of view being in a constant state for a predetermined time or for a plurality of frames in a series, for example.
Setting the mode parameter values in response to a changing or unchanged state of the configured field of view may allow for a change between scan modes during acquisition of a given composite image frame. That is, during acquisition of a given composite image frame in a series, the mode parameters may have the same value, or may vary between different values depending on the change in the state of the configured field of view (i.e., whether it is static or non-static).
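The automatic mode selection described above can be sketched as follows. The constant names, the helper signature and the optional settling delay are illustrative assumptions for this sketch, not features taken from the disclosure.

```python
# Illustrative sketch of automatic scan-mode selection based on whether
# the configured field of view is changing. Names are hypothetical.
FAST = 1  # "first value" of the mode parameter: quick, coarse traversal
SLOW = 2  # "second value": slower, higher-quality traversal

def select_mode(fov_changing, settle_frames_static=0, frames_since_change=0):
    """Return the mode parameter value for the next frame.

    fov_changing         -- True while the configured field of view is moving
    settle_frames_static -- optional delay (in frames) before dropping to SLOW
    frames_since_change  -- frames elapsed since the field of view last moved
    """
    if fov_changing:
        return FAST
    # Optionally keep scanning fast for a few frames after movement stops,
    # reflecting the permitted delay in the responsive setting.
    if frames_since_change < settle_frames_static:
        return FAST
    return SLOW
```

Both transitions (to the first value on movement, to the second value on becoming static) are handled by the same function, matching the fully automatic switching described above.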
However, in some embodiments, the state of the configured microscope field of view being changed or unchanged may correspond to two consecutive composite image frames (or component image frames thereof) in the series having different or the same configured fields of view, respectively. Typically, the composite image frame currently acquired at a given time is the second of the two consecutive frames, i.e., the later one. Making the changed/unchanged determination in this manner may be useful in implementations where no mid-frame acquisition mode change is expected or desired.
Thus, in some embodiments, particularly during acquisition of a composite image frame, the mode parameter has a first value when the configured microscope field of view is different from the microscope field of view of the immediately preceding composite image frame in the series. It should be appreciated that the parameter value may be adjusted to this value, for example, based on a condition relating to meeting the configured field of view, or the parameter value may be maintained at an appropriate value while continuing to meet the corresponding condition.
In the context of the present disclosure, the term "preceding" may be understood as occurring earlier in time, i.e. referring to a composite frame acquired before the current composite frame. In this context, the term "immediately" may be considered to mean that there is no intermediate frame between the current composite image frame being acquired and the immediately preceding composite image frame in the series. It should be appreciated that this does not preclude the acquisition of one or more other composite image frames, or any other frames, images, signals or other data, between the current composite image frame and the immediately preceding composite image frame in the series. Any such other composite image frames or data may be captured provided they are not part of the series of composite image frames. That is, while the series of composite image frames is preferably an uninterrupted series, as previously explained in this disclosure, and this may mean that no additional frames are acquired during or between the acquisition of composite image frames in the series, it may be necessary in some embodiments to interrupt the designated functions performed in acquiring successive image frames. For example, if there is a delay of one or more frames before a change in scan pattern takes effect, the series may be a series that is interrupted in this way.
The above-described functionality may involve using a first scan pattern when a difference between successive fields of view occurs. In contrast, in some embodiments, the mode parameter has a second value when the configured microscope field of view is the same as the microscope field of view of the immediately preceding composite image frame in the series. As described above, this may mean that the parameter value may be adjusted to this value, for example, based on a condition relating to the satisfaction of the configured field of view, or the parameter value may be kept at an appropriate value while continuing to satisfy the respective condition. In the context of such an embodiment, the immediately preceding composite image frame is generally the same as the immediately preceding composite image frame referenced with respect to having the mode parameter have the first value. In some implementations, the automatic switching to the slow mode may be based on a determination that successive composite image frames have the same configured field of view.
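The frame-to-frame comparison of configured fields of view, including the optional similarity threshold mentioned earlier, might be sketched as below. The (x, y, width, height) model of a field of view and the overlap measure are assumptions made for illustration only.

```python
# Hypothetical sketch: compare the configured field of view of the current
# composite frame with that of the immediately preceding frame in the series.
def fov_overlap_fraction(a, b):
    """Fractional area overlap of two fields of view given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ox = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    oy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    return (ox * oy) / min(aw * ah, bw * bh)

def mode_from_consecutive_fovs(current, previous, overlap_threshold=1.0):
    """Fast mode (1) when the configured field of view differs sufficiently
    from the preceding composite frame's; slow mode (2) otherwise."""
    return 2 if fov_overlap_fraction(current, previous) >= overlap_threshold else 1
```

A threshold below 1.0 would treat nearly identical fields of view as unchanged, corresponding to the similarity-based variant described above.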
In addition to automatic scan mode switching, the scan mode may also be controlled by the user. Thus, in some embodiments, the mode parameter is user configurable. In other words, the mode parameter may have a value that can be configured by the user. In some cases, it is particularly advantageous for a user of the microscope to be able to set the scanning mode to the slow mode. That is, the ability of a user to manually initiate a slow scan may be particularly beneficial when analyzing a sample using the described live monitoring techniques. Thus, in some preferred embodiments, the mode parameter is set to the second value when a first user input is provided. The first user input may be provided by way of a computer-user interface, and may include a command, key press, or trigger for setting the mode parameter to the second value and thereby causing a slow scan. In some embodiments, user input may be used to set the parameter to either or both of the first value and the second value; at least the ability to switch the parameter to the second value is preferably provided. This may facilitate better control of live monitoring and guidance of the sample.
It is particularly advantageous to provide the user with the ability to initiate a slow scan in this way when implemented in combination with a configuration in which the mode parameter has a first value when the configured microscope field of view is different from that used for the immediately preceding composite image frame in the series, or when the configured field of view is otherwise in a changing state. In this way, the user can set the scan to a slower mode as needed at the time of guidance in order to improve the image data.
The user will typically stop the movement of the configured field of view when doing so. Thus, advantageously, the scan is then switched to the fast mode when the field of view movement begins again.
In addition to implementing a changeable scan pattern to facilitate switching between fast and high quality scans as needed during sample steering, the manner in which the desired frame data is processed may also be adjusted according to similar principles. That is, in some embodiments, acquiring the composite image frame further comprises: for each, preferably all, of at least a subset of the plurality of pixels comprised by the second image frame, if the second mode parameter has a second value: combining the set of derived values of pixels with the set of derived values of corresponding pixels of each of the one or more previous second image frames in the series to obtain a set of combined pixel values with increased signal-to-noise ratio and replacing the set of derived pixel values for the second image frame with the set of combined pixel values for use in the composite image frame; or if the second mode parameter has a first value: a set of derived values for pixels in the second image frame is maintained for use in the composite image frame.
Preferably, the plurality of pixels is the same as a set of pixels constituting the frame. However, they may be a subset thereof. The second mode parameter may be referred to as a frame processing mode parameter. The above-mentioned "corresponding pixel" generally refers to a pixel in a different frame that corresponds to the same location on the sample, i.e. has a value derived from particles emitted from the same location on the sample.
Thus, the second mode parameter may control whether the second image frame data is processed in a "refresh" mode or an "accumulate" mode, as discussed in more detail below. In some embodiments, the second mode parameter has a first value if the configured microscope field of view is different from the microscope field of view of the immediately preceding composite image frame in the series. While this "accumulate" and "refresh" mode function is preferably applied to the second image frame, in various embodiments, it may additionally or alternatively be similarly applied to the first image frame.
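The "refresh"/"accumulate" choice for a single pixel can be sketched as below. The constant names and the simple averaging combination are illustrative assumptions; a summed or weighted combination would serve the same purpose of raising the signal-to-noise ratio.

```python
# Illustrative values of the second (frame processing) mode parameter:
# the first value keeps newly derived values, the second value accumulates.
REFRESH, ACCUMULATE = 1, 2

def combine_pixel(new_value, previous_values, mode):
    """Derive the composite-frame value for one pixel.

    ACCUMULATE: average the newly derived value with the corresponding
    pixel values of previous second image frames (better signal-to-noise).
    REFRESH: keep the newly derived value unchanged.
    """
    if mode == ACCUMULATE and previous_values:
        total = sum(previous_values) + new_value
        return total / (len(previous_values) + 1)
    return new_value
```

In practice this combination is only meaningful when the corresponding pixels come from the same location on the sample, which motivates the offset correction discussed later in this section.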
As with the first mode parameters previously described in this disclosure that may be used to control the traversal or scanning of the beam, the second mode parameters, which may control the mode of processing the acquired second image frame, may be configured to be adjusted according to a number of factors. In particular, manual control and automatic switching are conceivable.
Thus, in some circumstances, automatic switching to the second mode of frame processing (i.e., the accumulation mode) may be effected based on similarity or identity between successive configured microscope fields of view in the series. In particular, the second mode parameter may have the second value when the configured microscope field of view is the same as the microscope field of view of the immediately preceding composite image frame in the series.
Additionally or alternatively to the automatic parameter setting, the second mode parameter may be user configurable to control frame processing. In particular, in some embodiments, user input may be used to set the parameter to either or both of the first value and the second value, it being preferred to provide at least the ability to switch to the second value. Providing the user with the ability to have acquired second image frames combined, so as to improve the signal-to-noise ratio of the data therein, may help to obtain high-quality data quickly when a region of interest enters the field of view during live monitoring and steering of the sample. As described above, a mode parameter set to a value may be understood as a mode parameter configured to have that value. Thus, in some embodiments, the second mode parameter is set to the second value in response to user input. As with the user-selectable scan mode, this is particularly advantageous in embodiments where switching to the "refresh" frame processing mode is automatically effected based on the state of the configured microscope field of view. In this way, the user can set the frame processing function to accumulate so as to improve the data when sample guidance stops, and frame processing can then advantageously switch back to the "refresh" mode so that the presented data are updated quickly when the field of view starts moving again.
The nature of the user input is generally the same as described above with respect to the scan mode.
Some embodiments relate to compensating for unintentional shifts in the actual field of view of a microscope that may occur. Without such correction, such a deviation between the actual field of view and the intended or configured field of view makes it difficult or impossible to meaningfully combine the values of the pixels of the different image frames in the series. This is because such unintended movement or displacement of the actual field of view typically results in pixel data representing signals from the same location on the sample being attributed to pixels having different locations in the two respective image frames.
It has been noted in this disclosure that in some cases the actual field of view of the microscope may not always be the same as the configured field of view. This may be due to, for example, thermal effects on the sample stage mechanism or beam deflection electronics. The acquisition of the composite image frame may further include: obtaining field of view deviation data representing a difference between an actual field of view of the microscope and a reference field of view; and, for each of at least a subset of the plurality of pixels comprised by the second image frame, determining from the field of view deviation data a corresponding pixel in each of the one or more previous second image frames in the series, the set of derived values for the pixel being combined with those of the corresponding pixels to obtain a set of combined pixel values. It is particularly advantageous to apply this function when acquiring frames using the "accumulate" processing mode. Thus, the acquisition of the composite image frame preferably further comprises those steps if the second mode parameter has the second value. Preferably, the correction is performed only under this condition; however, applying offset correction when this condition is not satisfied is not excluded.
The "difference" may be a difference in linearity and/or area coverage and/or position of the sample between the fields of view. It may comprise a difference between the configured field of view of the microscope and the actual field of view, typically at a given time, and preferably at a time during acquisition of the composite image frame, in particular the second image frame. The field of view deviation data may be a measure of the difference, or a calculated, inferred or predicted indication, value or estimate of the difference. For example, it may be represented by a vector representation or an offset vector. The data may indicate the difference between the actual field of view and the configured field of view, or the offset may be indicated by a relative metric (e.g., with respect to other frames in the series). The field of view deviation data may be obtained from a plurality of obtained first image frames. It may indicate the field of view offset or be a measure or representation of the field of view offset. In some embodiments, the determination may be made by cross-correlation of successive first image frames, which in some embodiments are electronic images.
In some embodiments, the correction used in combining pixels is typically applied to at least a subset of the pixels in the second image frame, but preferably to all of them, and may also be applied to the pixels of the first image frame if the first image frame data are to be combined in the "accumulate" processing mode. For example, the subset for a given frame may be those pixels for which a corresponding pixel can be found. As mentioned earlier in this disclosure, corresponding pixels in two different image frames are those pixels which may be considered to correspond to the same location on the sample. However, for some pixels a corresponding pixel may not be found, or may not be identifiable, for example in the case of peripheral pixels corresponding to a portion of the sample that has been unintentionally shifted into the actual field of view. It will be appreciated that, in embodiments involving substantially simultaneous acquisition of the first image frame and the second image frame, unintentional movement of the sample relative to the beam will typically affect both frames. Preferably, registration is maintained between the first image frame and the second image frame when the composite image frame is acquired. This may be achieved, for example, by offset-correcting both the first and second image frame data, or only the second image frame data, using reference and/or measured first image frame data for the offset, and this may further comprise generating the composite image using the first image reference data. Accordingly, when acquiring the composite image frame, the above-described steps for offset-correcting image frame data may be applied to either or both of the first image frame and the second image frame.
In general, offset correction may be employed to identify field of view offset or offset between frames, allowing for the identification of pixels in a different second or composite image frame corresponding to the same location on the sample and the combination of their value sets. Thus, preferably, a set of combined pixel values is obtained such that the corresponding pixel corresponds to or represents the monitored particle emitted from the same sample location as the current pixel.
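A hedged sketch of obtaining field of view deviation data by cross-correlating two successive first image frames, and of using it to locate corresponding pixels, is given below. The brute-force correlation, the frame representation as nested lists, and the sign convention of the offset are all assumptions for illustration; a real implementation would typically use an FFT-based correlation.

```python
# Hypothetical sketch: estimate the field-of-view offset between a reference
# first image frame and the current one, then map pixels back to the
# reference so that values from the same sample location can be combined.
def estimate_offset(reference, current, max_shift=2):
    """Brute-force integer (dy, dx) shift maximising the overlap correlation."""
    h, w = len(reference), len(reference[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += reference[y][x] * current[yy][xx]
            if best is None or score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

def corresponding_pixel(y, x, offset):
    """Map a pixel (y, x) of the current frame to the reference frame.

    Assumed convention: a feature at reference (y, x) appears at
    current (y + dy, x + dx).
    """
    dy, dx = offset
    return y - dy, x - dx
```

Pixels whose mapped coordinates fall outside the reference frame have no corresponding pixel, matching the peripheral-pixel case noted above.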
Although the configured field of view and the actual field of view are preferably the same, any unintended offset in the position of the actual field of view may reduce the quality of the combined second image frame data by breaking the correspondence between pixels in different frames. Such unintended offsets are therefore preferably corrected between successive image frames, in order to ensure that pixel values are only combined if the data come from the same location on the sample. Thus, in some embodiments, the reference field of view comprises either the configured field of view of the composite image frame being acquired or the actual field of view of a previous composite image frame in the series. The actual field of view is preferably the field of view of the microscope during acquisition of the composite image frame. Generally, in embodiments involving offset correction, when the second mode parameter has the second value, once the system begins to integrate frames, the data from the first composite image frame (and typically its first image frame) of the period of operation in "accumulate" mode may be used as the offset correction reference, preferably regardless of any delay before integration begins, so that the data from all subsequent frames are combined using offset-corrected locations. Thus, the reference field of view may comprise the actual or configured field of view of a previous composite image frame (typically one acquired when or after the second mode parameter is set to the second value).
In addition to, or instead of, using the offset data to establish correspondence between acquired frames for which the field of view has been offset, the data may be used to mitigate the offset itself. Thus, in some embodiments, acquiring the composite image frame further comprises: especially if the second mode parameter has a second value: the actual field of view is adjusted according to the field of view deviation data to reduce the difference between the actual field of view of the microscope and the reference field of view. In other words, acquisition conditions (e.g., beam deflection or possible stage position and/or movement) of subsequent frames may be adjusted to match, or at least more closely match, the actual field of view to the reference field of view.
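Adjusting the acquisition conditions to counteract the measured deviation can be sketched very simply; the 2-D vector model of beam deflection and the subtraction convention are illustrative assumptions.

```python
# Hypothetical sketch: subtract the measured field-of-view offset from the
# configured beam deflection so the actual field of view more closely
# matches the reference field of view for subsequent frames.
def corrected_deflection(configured_deflection, offset):
    """Both arguments are assumed (x, y) vectors in the same units."""
    return (configured_deflection[0] - offset[0],
            configured_deflection[1] - offset[1])
```

The same principle could apply to stage position and/or movement, as noted in the text.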
In some embodiments, particularly where the particles monitored by the second detector can be used to derive data indicative of chemical elements present at the monitored sample location, processing is preferably performed to obtain characteristic line emissions. This is most preferably performed even for overlapping characteristic line distributions, and the processing preferably excludes any bremsstrahlung contribution. In some embodiments, acquiring the composite image frame comprises processing spectral data (preferably X-ray spectral data) obtained from the particles of the second set in order to obtain data indicative of a number (e.g. a count) of the particles of the second set respectively corresponding to one or more characteristic line emissions, in order to derive the respective set of values of the pixels comprised by the second image frame.
In the context of X-ray data, it will be appreciated that the particles of the second set comprise photons, i.e. X-ray photons. The processing of the spectral data may comprise processing one or more signals output by the second detector. As is well understood in the art, a characteristic line emission may refer to a set of X-ray transition lines corresponding to different transitions between states of a given chemical element.
By processing the X-ray spectral data to extract the number of photons corresponding to a particular characteristic line emission, even when the line emissions from two different elements extend over an energy range and overlap in energy, a respective set of values derived from the monitored particles of the second set can be derived. Thus, the processing preferably comprises extracting data indicative of the number of particles within one or more characteristic line emissions extending over the energy range and/or corresponding to overlapping energy ranges.
Preferably, in such an embodiment, the one or more sets of values each comprise X-ray energy spectrum data representable as a histogram, wherein the area of each rectangle represents the number of second particles having energy within an energy range corresponding to the width of the rectangle. It will be appreciated that in this case "rectangle" need not be so represented, but generally includes data suitable for being drawn as a rectangle on a histogram, for example a pair of values that can be visualized as a rectangle height and width, respectively. The one or more sets of values may each comprise a set of results of processing the histogram. The area of each rectangle (i.e. the product of the two values of the pair) may represent the number of second particles having an energy within an energy range corresponding to the width of the rectangle, in order to extract a set of values representing the number of second particles collected from the characteristic emission of the set of chemical elements.
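The histogram representation described above might be processed as follows to extract the number of photons within characteristic line windows. The (width, height) bin pairs follow the rectangle description in the text, while the energy window bounds, line names and bin values are hypothetical examples, not values from the disclosure.

```python
# Illustrative sketch: sum the rectangle areas (photon counts) of histogram
# bins whose energy falls within each named characteristic-line window.
def line_counts(bins, windows, bin_start=0.0):
    """Sum photon counts per named characteristic-line window.

    bins    -- list of (width, height) rectangle pairs, contiguous in energy;
               width * height gives the photon count in that energy range
    windows -- {line_name: (low_energy, high_energy)}
    """
    counts = {name: 0.0 for name in windows}
    edge = bin_start
    for width, height in bins:
        centre = edge + width / 2.0
        area = width * height  # photons in this bin
        for name, (lo, hi) in windows.items():
            if lo <= centre < hi:
                counts[name] += area
        edge += width
    return counts
```

A full implementation would additionally deconvolve overlapping line distributions and subtract the bremsstrahlung background, as the text indicates; this sketch only shows the counting step.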
The traversal path may be considered as a path over or along which the beam is scanned in order to obtain a frame covering the area.
In the first "fast" traversal mode, the faster total time to traverse the region with the beam may be implemented in a variety of ways, alone or in combination. For example, the total length of its traversal path in the first mode may be shorter than that of the second mode, so as to enable the beam to traverse it faster. Thus, in some embodiments, the length of the first traversal path is shorter than the length of the second traversal or path, such that the first total time is less than the second total time.
In other embodiments, the first and second traversal paths may have equal lengths, and the difference in traversal time may be attributable to the first and second traversal conditions being different. Thus, in some embodiments, the first set of traversal conditions and the second set of traversal conditions are configured such that the average rate at which the beam traverses the first traversal path, in particular in accordance with the first set of traversal conditions, is faster than the average rate at which the beam traverses the second traversal path, in particular under the second set of traversal conditions, such that the first total time is less than the second total time. The average rate is typically the mean rate over the entire traversal path, and a rate related to this mean may therefore be employed. In this way, a given traversal path may be traversed faster under the first set of conditions than under the second set of conditions.
In other words, the first and second conditions may be different and the first and second paths may be the same, or the first and second paths may be different and the first and second sets of conditions may be the same, or both the first and second paths and the first and second sets of conditions may be different. Advantageously, in any of these combinations, the first and second paths and the condition together may be such that the total traversal time for the first path and the condition is shorter than the total traversal time for the second path and the condition.
Generally, in such embodiments, the first set of traversal conditions and the second set of traversal conditions are configured such that a first linear density of locations along the first traversal path is less than a second linear density of locations along the second traversal path, the first linear density being the density of locations within the region at which the generated particles of the first set are configured to be monitored along the first traversal path, and the second linear density being the density of such locations along the second traversal path. Typically, this configuration is applied such that the average rate of traversing the first traversal path under the first set of conditions is faster than that for the second path under the second set of conditions.
The linear density as described above may be understood to refer to the corresponding average linear density over the entire length of a given traversal path, or at least a portion thereof.
Typically, the linear density of locations relates to a (possibly discontinuous) portion corresponding to one or more scan lines in the traversal path, which may exclude, for example, portions of the path between scan lines in the raster pattern.
In general, line density may be considered to refer to the distribution density, metric or number of locations to be monitored per unit length of the traversal path. In some embodiments, such a difference in line density may also mean that the total number of locations to be monitored according to the conditions of the first set is less than the conditions for the second set. This may be the case, for example, where the first path length and the second path length are the same or similar.
Preferably, in embodiments such as this, the first set of traversal conditions and the second set of traversal conditions are configured such that a first linear density of locations along the first traversal path, at which the generated particles of the second set are configured to be monitored, is less than a second linear density of locations along the second traversal path at which the generated particles of the second set are monitored within the region, in particular such that the average rate of traversal of the first traversal path under the first set of conditions is faster than the average rate of traversal of the second path under the second set of conditions. Thus, the rate difference may be achieved by either or both of the first and second linear densities of locations being different.
The difference in scan rate between the two modes can also be achieved by varying the time taken to monitor the signal from each monitored location during the scan. The first set of traversal conditions and the second set of traversal conditions may be configured such that a first configured monitoring duration, for which particles of the first set generated at each of the first plurality of locations along the first traversal path are monitored, is less than a second configured monitoring duration, for which particles of the first set generated at each of the first plurality of locations along the second traversal path are monitored. In some embodiments, the monitoring duration for each location of the set of monitored particles when the mode parameter has the first value is less than the monitoring duration for any location of the set of monitored particles when the mode parameter has the second value.
For example, the monitoring duration for each or at least some of the monitored locations under a given set of conditions may be the same or substantially the same.
Preferably, when the mode parameter has the first value, the average (mean) monitoring duration of the monitored locations is less than the average monitoring duration when the mode parameter has the second value.
Likewise, the rate difference may be effected via either or both of the first set and the second set of particles. Thus, in some embodiments, the first set of traversal conditions and the second set of traversal conditions are configured such that a first configured monitoring duration, for which particles of the second set generated at each of the second plurality of locations along the first traversal path are monitored, is less than a second configured monitoring duration, for which particles of the second set generated at each of the second plurality of locations along the second traversal path are monitored.
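The relationship between linear density, monitoring (dwell) duration and total traversal time described above can be illustrated with a simple model; the model itself and all numeric values below are hypothetical.

```python
# Illustrative model: total traversal time as the number of monitored
# locations (path length x linear density) times the dwell per location,
# plus any repositioning/flyback overhead. All figures are hypothetical.
def total_traversal_time(path_length, linear_density, dwell_time, overhead=0.0):
    n_locations = path_length * linear_density
    return n_locations * dwell_time + overhead

# Fast mode: lower linear density and shorter dwell per location.
fast_time = total_traversal_time(path_length=1.0, linear_density=256,
                                 dwell_time=1e-6)
# Slow mode: denser sampling and longer dwell on the same-length path.
slow_time = total_traversal_time(path_length=1.0, linear_density=1024,
                                 dwell_time=1e-5)
```

With equal path lengths, either lever alone (density or dwell) already makes the first total time less than the second, as the text describes.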
In various embodiments, the first traversal path and the second traversal path may be the same or different for a given composite image frame. For example, the paths may be the same, and changing the rate at which the path is traversed may thus constitute changing the rate at which the region itself is traversed. Thus, in some embodiments, when the configured microscope field of view is different from the microscope field of view for the immediately preceding composite image frame in the series, the beam is traversed through the first traversal path over the region at an average rate faster than the average rate at which the beam is traversed through the second traversal path over the region when the configured microscope field of view is the same as that for the immediately preceding composite image frame.
However, in some embodiments, the traversal paths taken in the two modes may be different for a given composite image frame. For example, if interlaced scanning is used to achieve scanning at the second average rate (i.e., in the "slow" or "static" mode) and non-interlaced scanning is used in the first, "fast" or "dynamic" mode, the time taken for a single complete scan of the region by the beam may be the same in both the first and second modes. In this case, the time taken for the beam to traverse the entire second traversal path (which includes multiple scans that in combination preferably cover the area) remains greater than the time taken for the beam to traverse the first traversal path, which includes only a single "pass" of the area and may form a traversal path that covers, or at least substantially covers, the area.
In other words, the second traversal path may be longer than the first traversal path, for example because it includes more passes over the region. In these cases, the increase in the average traversal rate of the first mode as compared with the second mode may be due, at least in part, to the extension of the path in the second mode. In some embodiments, for example, the time required for a single pass over the region may be the same for both the first and second traversal paths. However, in these cases, the requirement for more passes in order to traverse the second path in its entirety means that the total time required for the second path is greater, and so its rate of traversal can be considered slower.
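The multi-pass case above can be reduced to simple arithmetic. All figures here are assumed for illustration: even with identical single-pass times, a path comprising several interlaced passes has a greater total traversal time and therefore a slower average traversal rate.

```python
# Illustrative arithmetic (all figures assumed): even if a single complete
# pass over the region takes the same time in both modes, a second
# traversal path comprising several interlaced passes takes a greater
# total time, so its average traversal rate is slower.

def path_time(passes, single_pass_s):
    """Total traversal time for a path made of `passes` complete passes."""
    return passes * single_pass_s

single_pass_s = 0.05                      # hypothetical: 50 ms per pass
t_first = path_time(1, single_pass_s)     # "fast" mode: one pass
t_second = path_time(4, single_pass_s)    # "slow" mode: four interlaced passes

assert t_first < t_second                 # second path's total time is greater
```

The inequality holds purely because the second path is longer, not because the beam moves more slowly at any instant, which matches the distinction the text draws between path length and instantaneous speed.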
Throughout this disclosure, for any of the described embodiments in which the change in traversal rate is defined in terms of the time taken to traverse the region, it is also contemplated that the embodiment may instead be performed by defining the change in traversal rate in terms of the time taken to traverse the traversal path. Likewise, the change in the traversal rates of the first and second acquisition modes may equivalently be defined as a change in the total time required to scan the respective first and second traversal paths or, in some embodiments, as a change in the total time required to scan the region. The traversal time is referred to as the "total" time because it includes all of the time taken to traverse the entire traversal path in a given mode. As will be appreciated from the foregoing description and others throughout this disclosure, the acquisition of frames in the series may involve interruption of traversal in one mode by switching to traversal in another mode, and thus the total time need not be the same as the actual time taken for the beam to traverse a portion of the sample in order to generate particles for acquiring a frame.
Similarly, for any embodiment describing a change in traversal rate, such a change may be understood as a change in the configured total time, or the total time that would be taken, for the beam to traverse the entire traversal path. As such, the rate need not be defined as the average speed at which the beam moves over a given distance on the sample surface during monitoring, although in some embodiments the change in rate may nonetheless correspond to a change in such an average speed. Instead, the average traversal rate may be understood as a measure of how quickly traversal of the entire path occurs, which may also correspond to, or be derived from, a measure of the time required to scan the entire traversal path.
Thus, during acquisition of a composite image frame, for some embodiments, step (a) may alternatively be understood to include: traversing the charged particle beam through a region of the sample, the region corresponding to a configured field of view of the microscope, wherein when the configured field of view of the microscope is different from the field of view of the microscope of the immediately preceding composite image frame in the series, traversing the beam through a first traversal path over the region at an average rate that is faster than an average rate of traversing the beam through a second traversal path over the region when the configured field of view of the microscope is the same as the field of view of the microscope of the immediately preceding composite image frame in the series.
Charged particles that are caused to traverse the region may be understood as a beam, typically an electron beam, that is caused to scan the region. That is, in the context of the present disclosure, the term "scanning" may be understood as traversing a beam across a surface, object, or portion of a sample.
The functions applied during traversal of the region by the beam, during processing of the pixels of the second image frame and, furthermore, typically during monitoring of the first and second sets of generated particles, depend on whether the configured microscope field of view is different from, or the same as, the field of view used for the immediately preceding composite image frame in the series. It should be appreciated that the functions applied in any or all of these parts of the composite image frame acquisition process when the configured microscope field of view for the frame currently being acquired is different from that of the immediately preceding composite image frame are also typically applied in the absence of any such immediately preceding composite image frame. That is, for the first frame in a series, the method typically performs part or all of the composite image frame acquisition process as if the configured microscope field of view of the current frame were different from the microscope field of view of an immediately preceding composite image frame in the series.
The particular manner in which a frame is acquired is typically based on a determination as to whether the field of view of the current frame differs from that of the immediately preceding frame. This determination may be made, for example, based on the field of view of the microscope at the beginning of the process of acquiring a given composite image frame. In some embodiments, the determination may be based on the field of view at the time particles are generated at, or monitored from, the first location or locations, or when the beam first impinges on the first location, for the respective current and immediately preceding frames in the series. In some embodiments, this determination may be made or re-evaluated once, multiple times, or continuously during the acquisition of part or all of a composite image frame. This may advantageously allow the mode to be changed before the current frame is fully acquired, as will be described in more detail below.
Traversing the beam through the region in a reduced total time can be understood as the scan rate being changed, in particular being made greater in this case, depending on whether the field of view is the same. It is understood that the reduced total time refers to a total traversal time for the first traversal path that is less than the total time taken to traverse the beam through the second traversal path. In some embodiments, the reduced traversal time is achieved at least in part by traversing the beam through the first traversal path at an average rate that is faster than the average rate at which the beam traverses the second traversal path. Additionally or alternatively, however, the difference in traversal time may be achieved at least in part by using different traversal paths (e.g., a first traversal path having a shorter total length than the second traversal path).
It will be appreciated that the field of view change or difference may involve either or both of movement of the field of view on or relative to the sample surface and an increase or decrease in the size of the field of view. For example, such an increase or decrease may be understood as a change in the magnification level.
The traversal of the region by the beam may generally follow any one or more of, for example, the "raster", "interlace raster" and "serpentine" scan patterns depicted in Fig. 1, as well as many other types of scan path along which the focused beam may traverse the field of view. The conditional application described above of different average traversal rates according to whether the configured field of view has changed may conversely be understood as traversing the beam through the region, when the configured microscope field of view is the same as that for the immediately preceding composite image frame of the series, at an average rate slower than the average rate of traversal when the field of view differs. In the context of the present disclosure, the average rate at which the beam is traversed through the region may be understood or defined in terms of the configured or expected total time that would be taken to traverse the region at a given rate if that mode were applied throughout the acquisition of the entire frame. In some embodiments, the speed at which the intersection between the beam and the sample surface moves across the surface is not constant. For example, the scanning process may involve moving and stopping between discrete locations on the surface in order to monitor particles from those discrete locations. The "fast" and "slow" traversal or scan modes may correspond to relatively shorter and longer pixel dwell times, respectively. These dwell time differences may be applied, for each mode, to one, more or each monitored location, and the pixel values obtained accordingly when operating in a given mode. In the slow scan mode, more time is spent collecting signals from the traversed locations on the sample, and thus more data related to those locations is collected, which advantageously provides a greater signal-to-noise ratio.
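The three named scan patterns can be sketched as visit orders over a grid of monitored locations. This is a minimal illustration under the assumption of a rectangular grid; the function names and the two-field split of the interlace pattern are illustrative choices, not definitions from the patent.

```python
# A minimal sketch (assumption: rectangular grids of monitored locations)
# of three traversal paths named in the text: "raster", "serpentine" and
# a two-field "interlace raster". Each returns the (row, col) visit order.

def raster(rows, cols):
    # Every row scanned left to right, rows in order.
    return [(r, c) for r in range(rows) for c in range(cols)]

def serpentine(rows, cols):
    # Alternate rows scanned in opposite directions (no flyback).
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

def interlace_raster(rows, cols):
    # Even-numbered rows first, then odd-numbered rows (two "fields").
    order = list(range(0, rows, 2)) + list(range(1, rows, 2))
    return [(r, c) for r in order for c in range(cols)]

# All three paths cover the same region: the same locations, each once.
for path in (raster(4, 4), serpentine(4, 4), interlace_raster(4, 4)):
    assert len(path) == 16 and len(set(path)) == 16
```

All three orderings visit the same set of locations, so they represent different traversal paths over the same region, consistent with the text's point that the path and the region are distinct notions.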
More generally, when operating in a given mode or at a given rate, the traversal speed may be configured to vary throughout the scan. For example, during a raster scan, the beam preferably moves faster between the end of one line and the beginning of the next than when traversing a line of the pattern.
The "fast" and "slow" average traversal rates are typically configured such that, accounting for intermediate scan speed variations such as these, the total time that would be required to traverse the entire region at the faster rate corresponding to the first mode is less than the total time that would be required to traverse the entire region at the slower rate when operating in the second mode. In addition, it should be appreciated that the traversal patterns of the first and second modes may be the same or different; in the latter case, such a difference in the scan patterns may itself give rise to a difference in the average traversal rate.
The faster and slower traversal rates, conditionally applied depending on whether the field of view is changed or unchanged, may be defined according to a first traversal duration T1 and a second traversal duration T2, respectively, where T1 is less than T2. In general, however, those configured total times need not be the same for any two composite image frames in a series. Instead, for a given frame, these times are preferably defined separately, such that the inequality described above applies for that frame, while between any two given frames in the series each of T1 and T2 may be the same or different.
The traversal rate is typically set at the beginning of the acquisition of a composite image frame. During acquisition of each frame, the rate need not be explicitly set and may, for example, retain a value, parameter, or configuration that was previously set, stored, or maintained from the acquisition of the preceding composite image frame. Furthermore, as will be discussed in more detail below, the rate may also be adjusted during acquisition of a given composite image frame.
The number of positions from which the first set of particles and the second set of particles are monitored may remain the same or differ between two given composite image frames in a series. In this way, the first plurality and the second plurality may each differ in number between composite image frame acquisition processes.
The set of values derived from the monitored particles generated at the second plurality of locations may be a set containing a single value. Preferably, however, the set comprises a plurality of values derived from those monitored particles. In a preferred embodiment, the pixels in the second image frame may comprise respective sets of values representing the spectra of particles emitted from the second plurality of locations.
Typically, the second detector is an X-ray detector. Thus, the particles of the second set are X-ray photons and the set of values of the pixels in the second image frame represents an X-ray spectrum. In some embodiments, the set of derived values conditionally maintained in step (d) may be understood as the set of values derived from the monitored particles in step (c).
In order to increase the collection solid angle of the X-rays, an arrangement may be used in which an X-ray detector is provided at a position between the beam source and the sample. The X-ray detector may be equipped with one or more sensor portions facing the sample and at least partially surrounding the incident charged particle beam. In particular, in a preferred embodiment, the X-ray detector is mounted below a pole piece of a particle beam lens (e.g., an electron lens) of the microscope. In this context, the term "below" may be taken to mean closer to the sample than the pole piece, with respect to position along an axis parallel to the beam. Preferably, the X-ray detector is mounted directly below the pole piece. In this way, the solid angle over which the second detector receives X-rays can be increased in order to improve the signal-to-noise ratio of the X-ray signal. Sub-pole-piece detectors that are close to the sample and/or at least partially surround the beam help to increase the collection solid angle in this way. Thus, preferably, the X-ray detector has one or more sensor portions facing the sample and at least partially surrounding the beam. The sensor portions typically comprise corresponding sensor surface portions, and these surface portions face the sample in that they are positioned and/or oriented such that those surfaces face the sample. The portions at least partly surrounding the beam may be understood as portions distributed around the beam. Thus, the sensor portion(s) may be present on all sides of the beam and may be continuous, e.g. annular, or discontinuous. For example, the detector may be arranged in the form of a plurality of discrete sensor surfaces positioned at intervals around the beam. A sensor surface may be understood as an active surface adapted to receive an incident particle and to generate a signal output in response to the incident particle.
However, it should be understood that the sensor portion need not surround the beam in order to achieve an improved solid angle for signal collection. Other arrangements are conceivable in which the dimensions of the sensor surfaces combine with their intended working distance, that is, the separation between the plane in which the sensor portion or portions lie and the sample surface, in particular at the point of impact of the beam, to obtain an increased solid angle.
Whether the total X-ray detector surface comprises one sensor portion or a plurality of sensor portions, its size and/or shape is preferably configured such that, when the working distance (i.e., the spacing between the sensor plane and the beam spot, or the minimum or average spacing between any portion of the X-ray sensor surface and the beam spot) is less than or equal to 6 mm, the total solid angle subtended by the one or more X-ray sensor portions at the location where the beam impinges the sample is greater than 0.3 steradians. More preferably, the solid angle is greater than 0.4 steradians over this working distance range. Additionally or alternatively, to provide an improved X-ray signal, the X-ray sensor surface may have a total area of greater than 10 mm², preferably greater than 20 mm², more preferably greater than 30 mm², 40 mm² or 50 mm².
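The solid angle figures quoted above can be checked with the standard on-axis formula for a coaxial annular sensor. The geometry below (beam-hole radius, outer radius, sensor height) is entirely hypothetical; only the 0.3 sr and 10 mm² thresholds come from the text.

```python
# A hedged numerical check (sensor geometry is assumed, not from the
# patent): on-axis solid angle subtended at the beam spot by an annular
# X-ray sensor lying in a plane a distance d above the sample.
import math

def annulus_solid_angle(d_mm, r_in_mm, r_out_mm):
    """Solid angle (steradians) of an annulus of inner/outer radii
    r_in/r_out, coaxial with the beam, seen from a point d below it."""
    def cap(r):  # solid angle of a full disc of radius r at distance d
        return 2.0 * math.pi * (1.0 - d_mm / math.hypot(d_mm, r))
    return cap(r_out_mm) - cap(r_in_mm)

# Hypothetical sub-pole-piece sensor: 2 mm beam hole, 6 mm outer radius,
# sensor plane 4 mm above the beam spot (working distance <= 6 mm).
omega = annulus_solid_angle(4.0, 2.0, 6.0)
area = math.pi * (6.0**2 - 2.0**2)   # sensor area in mm^2

assert omega > 0.3          # exceeds the 0.3 sr figure given in the text
assert area > 10.0          # exceeds the 10 mm^2 figure given in the text
print(f"solid angle ~ {omega:.2f} sr, area ~ {area:.1f} mm^2")
```

Even this modest assumed geometry comfortably exceeds the stated thresholds, illustrating why a large annular sensor mounted close to the sample improves X-ray collection.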
In some embodiments, the "combination" function conditionally applied at step (d) may involve the values of corresponding pixels of each of one or more previous second image frames of the series for which the microscope field of view is the same as the currently configured field of view, as described above. Whether the sets of values of corresponding pixels of one previous second image frame or of multiple previous second image frames are used typically depends on which, or how many, of the composite image frames have the same field of view as the currently configured field of view. Preferably, the derived sets of values of all previously acquired composite image frames having the same field of view as the current composite image frame are used in the combining process. In some embodiments, these combinations may be applied cumulatively, e.g., as each composite image frame is acquired. The previous second image frames in the series mentioned in step (d) of the frame acquisition process, for which the microscope field of view is the same as the configured microscope field of view, may be understood as one or more second image frames that have been obtained as part of acquiring one or more corresponding composite image frames preceding the current composite image frame. The configured microscope field of view also referred to in this step can be understood as the current microscope field of view. In various embodiments, the currently configured microscope field of view may be defined as the field of view configured at the beginning of, or at some predetermined point during, acquisition of the current composite image frame. In some embodiments, it may be defined as the instantaneously configured field of view at a given time during the acquisition of a composite image frame.
Thus, in such embodiments, the instantaneous field of view may vary throughout the acquisition of a composite image frame, and so in some embodiments the configured acquisition mode or traversal rate may be changed in response to changes in the field of view occurring during the acquisition of a given composite image frame.
The function of step (d) is preferably performed for each pixel of the second image frame during a given composite image frame acquisition period. In this way, all pixels of the frame obtained in step (c) are subjected to a conditional "combining" or "refreshing" process, thereby maximizing enhancement of S/N and sample steering. However, in some embodiments, it is possible to omit one or more pixels of a given second image frame from this processing step. Thus, typically, the function of step (d) is performed for each of the plurality of pixels comprised by the second image frame, and not necessarily for each pixel comprised by the second image frame. In other words, in some embodiments, or for one or more frames in the series, the second image frame may include additional pixels in addition to the plurality of pixels. This applies similarly to embodiments in which conditional pixel value combinations are additionally applied to the first image frame, as well as to those embodiments involving per-pixel setting of acquisition mode parameters, which are described later in this disclosure.
Generally, combining the first image frame and the second image frame to produce a composite image frame comprises combining the first image frame with a second image frame whose pixels comprise sets of values that are either the retained sets of values derived during acquisition of the current composite image frame, where the field of view differs from that of the immediately preceding composite image frame, or the combined sets of values that replace them, where the field of view is the same.
Steps (b) and (c) are typically performed substantially simultaneously during acquisition of the composite image frame. This is to be understood as the entire process of monitoring the first set of particles and the entire process of monitoring the second set of particles taking place simultaneously, or substantially simultaneously, over the whole area. This substantial concurrency is such that the first image frame and the second image frame provide first and second spatial representations of the region captured substantially simultaneously. However, the monitoring durations and timings for the respective positions of the first and second pluralities may differ between the first and second detectors and the first and second sets of particles, and between the positions at which the first and/or second sets of particles are detected. For example, different and/or variable sampling rates and pixel resolutions may be used. Thus, monitoring the sets of particles generated at each of the first and second pluralities of locations may or may not be simultaneous, even where the overall collection of signals by the first and second detectors is simultaneous.
The method can be used to analyze samples in any charged particle beam instrument or instrument using a focused particle beam. Thus, in this disclosure, the term microscope is used to refer to any such instrument. Typically, the microscope is an electron microscope, wherein the charged particle beam is an electron beam. In other embodiments, the charged particle beam is an ion beam.
In addition, displaying the combined first and second image types in the composite image frames in real time as the series of composite images is acquired means that such actions can be performed "on the fly", seamlessly, without pausing or interrupting the user's navigation of the sample.
It will be understood that the term "particle" as used in this disclosure includes particles of matter, including ions and sub-atomic particles such as electrons, as well as particles representing quanta of electromagnetic radiation (i.e., photons, e.g., X-ray photons). For example, in some embodiments, the charged particle beam is an ion beam that generally causes resultant particles, including electrons and ions, to be emitted from the sample, which can be monitored by a detector.
The method is particularly advantageous in embodiments in which the second detector is of a type which, under given microscope conditions, typically yields a lower signal-to-noise ratio in the monitored signal than the signal the first detector is adapted to monitor. In such an embodiment, the combination of the first detector and the second detector may be selected such that the first detector quickly provides a high signal-to-noise image signal in order to allow a user to rapidly inspect different areas of the sample. In general, while navigating around the sample by moving the field of view across its different areas or otherwise adjusting the field of view (e.g., by zooming in), the second, lower signal-to-noise detector may provide a second image frame of poorer quality than the first image frame obtained by the first, higher signal-to-noise detector. However, when the user stops changing the field of view, for example by ceasing to adjust the stage position or microscope conditions so as to maintain a fixed field of view, repeated measurements of the same pixels or positions on the sample can be acquired by the detector. Thus, by combining pixel data from the second image frame with the corresponding pixel data of previously acquired second image frames corresponding to the same positions on the sample, the lower signal-to-noise ratio of the images acquired using the second detector can be mitigated, and a higher-quality image derived from the data acquired by the second detector can be obtained.
As explained earlier in this disclosure, the faster scan rate applied when the field of view is changed provides the advantage that the operator of the microscope can track a feature of interest more effectively while navigating the sample, since it enables a faster frame rate for the displayed composite image frames. The signal-to-noise ratio (especially for the second detector) is typically lower at faster scan rates, and so in conventional techniques the use of such higher rates is not feasible. However, the inventors have surprisingly found that, while the field of view is being changed, compromising the signal-to-noise ratio in order to facilitate easier navigation is perceived as the greater benefit. The present approach achieves this advantage while also allowing a higher signal-to-noise ratio for a static field of view, by switching between the faster and slower scan rates.
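The signal-to-noise benefit of combining repeated frames at a fixed field of view can be demonstrated with a small simulation. This is an illustrative statistical sketch only (all numbers assumed): averaging N independent noisy readings of the same pixel reduces the noise roughly as 1/sqrt(N).

```python
# A small simulation (illustrative only) of the signal-to-noise benefit
# the text describes: averaging repeated noisy measurements of the same
# pixel reduces the noise roughly as 1/sqrt(N).
import random
import statistics

random.seed(0)
true_value, noise_sd = 100.0, 10.0
n_frames, n_trials = 25, 400

def pixel_estimate(n):
    """Mean of n noisy frame readings of one pixel."""
    return statistics.fmean(random.gauss(true_value, noise_sd) for _ in range(n))

# Spread of single-frame estimates vs. 25-frame combined estimates.
sd_single = statistics.stdev(pixel_estimate(1) for _ in range(n_trials))
sd_avg = statistics.stdev(pixel_estimate(n_frames) for _ in range(n_trials))

# With N = 25 frames combined, noise should drop by about a factor of 5.
assert sd_avg < sd_single / 3
print(f"single-frame sd ~ {sd_single:.2f}, 25-frame sd ~ {sd_avg:.2f}")
```

This is the statistical mechanism behind step (d): while the field of view is static, each additional second image frame contributes another independent measurement of the same locations, so the displayed combined values become progressively cleaner.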
In some embodiments, each image frame includes a plurality of pixels corresponding to a plurality of locations within the region and having values representative of the monitored particles generated at the plurality of locations within the region. For example, the pixel values may represent the intensities of particles detected by the detector and generated at the corresponding locations. Thus, in some embodiments, the composite image frame may provide, for each of the plurality of pixels, a representation of particles generated at respective locations within the region and monitored by each of the first detector and the second detector. In other embodiments, such as those in which the image frame is an electron backscatter diffraction image, the pixel values may not be directly representative of the particles generated at the location, but may be derived therefrom by way of calculation.
According to a second aspect of the present invention there is provided a method for analysing a sample in a microscope, the method comprising:
acquiring a series of composite image frames using a first detector and a second detector different from the first detector, wherein acquiring the composite image frames comprises:
a) Traversing a charged particle beam through a region of the sample, the region corresponding to a configured field of view of a microscope, wherein when the configured field of view of the microscope is different from the field of view of the microscope for an immediately preceding composite image frame of the series, the beam is traversed through a first traversal path over the region for a total time that is less than a total time for traversing the beam through a second traversal path over the region when the configured field of view of the microscope is the same as the field of view of the microscope for the immediately preceding composite image frame of the series,
b) Monitoring a first set of resulting particles generated within the sample at a first plurality of locations within the region using a first detector to obtain a first image frame comprising a plurality of pixels corresponding to the first plurality of locations and having values derived from the monitored particles generated at the first plurality of locations,
c) Monitoring a second set of resulting particles generated within the sample at a second plurality of locations within the region using a second detector to obtain a second image frame comprising a plurality of pixels corresponding to the second plurality of locations and having respective sets of values derived from the monitored particles generated at the second plurality of locations,
d) For each of a plurality of pixels comprised by the second image frame: if the configured microscope field of view is different from the microscope field of view for the immediately preceding composite image frame in the series: maintaining the set of derived values for the pixel in the second image frame for use in the composite image frame; or if the configured microscope field of view is the same as the microscope field of view for the immediately preceding composite image frame in the series: combining the set of derived values of the pixel with the set of derived values of the corresponding pixel of each of one or more previous second image frames of the series for which the microscope field of view is the same as the configured microscope field of view, to obtain a set of combined pixel values with increased signal-to-noise ratio, and replacing the set of derived pixel values with the set of combined pixel values in the second image frame for use in the composite image frame, and
e) Combining the first image frame and the second image frame to produce a composite image frame such that the composite image frame provides data derived from particles generated at the first plurality of locations and the second plurality of locations within the region and monitored by each of the first detector and the second detector; and displaying the series of composite image frames on a visual display in real time, wherein the visual display is updated to display each composite image frame in turn.
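The per-pixel logic of steps (d) and (e) above can be sketched in a few lines. This is an assumption-laden illustration, not the claimed implementation: the function names are invented, and a cumulative running mean is used here as one plausible choice of "combining" rule.

```python
# A minimal sketch of steps (d)-(e): per-pixel running combination of
# second-image-frame values while the field of view is unchanged, and a
# reset ("maintain the derived values") when it changes. The running-mean
# combination rule and all names are illustrative assumptions.

def update_second_frame(derived, accum, count, fov_changed):
    """derived: newly derived per-pixel values for the current frame;
    accum/count: running per-pixel sums and frame count for the current
    field of view. Returns (frame_for_composite, accum, count)."""
    if fov_changed or accum is None:
        # Field of view differs from the previous frame: keep the newly
        # derived values and restart the accumulation.
        return list(derived), list(derived), 1
    # Same field of view: combine with all previous frames at this view,
    # which raises the signal-to-noise ratio of the displayed values.
    accum = [a + d for a, d in zip(accum, derived)]
    count += 1
    combined = [a / count for a in accum]
    return combined, accum, count

accum, count = None, 0
frame1, accum, count = update_second_frame([4.0, 8.0], accum, count, fov_changed=True)
frame2, accum, count = update_second_frame([6.0, 10.0], accum, count, fov_changed=False)
assert frame1 == [4.0, 8.0]   # retained derived values (field of view changed)
assert frame2 == [5.0, 9.0]   # mean of the two frames at the same view
```

The composite frame of step (e) would then simply pair each of these second-frame values with the corresponding first-frame pixel for display.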
The presently described embodiments and advantageous features may be used in a method according to any of the first, second or third aspects described in this disclosure or specific examples thereof.
It will be appreciated that, in the context of the second aspect, causing the total time for the beam to traverse the first traversal path over the region to be shorter than that for traversing the second traversal path over the region generally describes the first total time required for the beam to traverse the entire first traversal path according to the first set of traversal conditions being shorter than the second total time required for the beam to traverse the entire second traversal path according to the second set of traversal conditions. In other words, the traversal may follow either of the two paths and be governed by either of the two respective sets of traversal conditions.
Traversing a path generally refers to the movement of the beam or beam spot along at least a portion of the given traversal path. If the mode is unchanged for the duration of acquiring a composite image frame, or more specifically for the duration of traversing the beam through the region in order to obtain the composite image frame, the entire first path or the entire second path is typically traversed. However, if the mode is switched during, for example, traversal of the region, neither the first path nor the second path need be traversed in its entirety.
One way to increase the overall traversal rate of a region is to sample data from one or both detectors at fewer locations, or less frequently, during traversal. This may be considered as reducing the spatial resolution of the captured monitored-particle data. Accelerating the acquisition of each composite image frame by collecting fewer data samples for one or both of the first and second image frames (which generally corresponds to fewer pixels being generated in a given image frame) enables faster updating of the display. In this way, the series of composite image frames may be displayed at a higher frame rate. Thus, in some embodiments, traversing the beam through the first traversal path (or, in some embodiments, the region) in a reduced total time or at a faster average rate comprises configuring the total number of positions within the region at which the first set of generated particles is to be monitored to be less than the total number of positions within the region at which the first set of generated particles is configured to be monitored when the configured microscope field of view is the same as the microscope field of view for the immediately preceding composite image frame in the series. Thus, the scan rate may vary depending on whether the field of view is the same as that used for the previous frame. The total number of locations within the region is typically an instantaneous, configured number, which may be regarded as the expected total for the region in a given mode. It can thus be understood similarly to the overall traversal rate, in that it generally refers to traversal and monitoring of the entire area.
However, if the frame acquisition mode is changed from the "fast" mode to the "slow" mode during acquisition of a frame, for example, the actual number of positions need not match the configured number, since the configured total number of positions to be monitored increases from the lower total to the higher total part-way through. It will be appreciated that with such intermediate-frame changes of acquisition mode or rate, the actual average rate achieved by the traversal and monitoring will typically differ from both of the configured "fast" and "slow" rates, and the actual total number of locations from which signals are collected in steps (b) and (c) will typically differ from both of the configured larger and smaller numbers. Typically, the actual achieved rate and number of positions will be intermediate between the two configured values.
In view of this, it will be appreciated that configuring the plurality of locations to be smaller in number, as may be done in these embodiments, may be regarded as switching to a pattern in which the number of sampling locations, or the temporal and/or spatial frequency of sampling, is reduced. It should also be appreciated that it is the first set of generated particles that is configured to be monitored in step (b).
In some embodiments, each location of the first and/or second plurality of locations may be defined by a finite area within the region. For example, each of a respective plurality of areas from which the first and second sets of particles are monitored may define the locations. Typically, when the "fast" acquisition mode is applied, each of the reduced plurality of positions is defined by, or corresponds to, a larger area than that defined by or corresponding to each of the greater number of positions used for detection or particle collection when the "slow" mode is applied. As explained above, making the number of traversal positions smaller than when the field of view is unchanged typically results in a reduced number of configured pixels in the fast traversal mode.
These embodiments, in which the number of locations and/or corresponding pixels is changed, may conversely be understood as follows: causing the beam to traverse the region at a slower average rate when the configured microscope field of view is the same as that of the immediately preceding composite image frame in the series comprises increasing the configured total number of locations within the region at which the first set of generated particles is to be monitored to more than the configured total number used when the configured microscope field of view is different from the microscope field of view for the immediately preceding composite image frame in the series.
In addition, instead of applying such a change to the number of monitored positions in step (b), a similar approach may be applied in step (c) during acquisition of the frame. Thus, in some embodiments, causing the beam to traverse the region or first traversal path with a reduced total time or at a faster average rate comprises: reducing the configured total number of locations within the region at which the second set of generated particles is to be monitored to fewer than the configured total number of such locations used when the configured microscope field of view is the same as the microscope field of view for the immediately preceding composite image frame in the series.
Instead of, or in addition to, reducing the resolution or number of pixels, applying a different acquisition rate or pattern may involve reducing the time taken to acquire data for each pixel. When the field of view is changing, obtaining the derived values faster in "fast" mode allows the refresh rate to increase at the expense of signal-to-noise ratio. Conversely, taking more time in "slow" mode to obtain those pixel values when the field of view is unchanged improves the signal-to-noise ratio of the acquired data, since the refresh rate is less important once movement or other changes of the field of view cease during navigation of the sample. Thus, in some embodiments, causing the beam to traverse the region with a reduced total time or at a faster average rate comprises reducing a configured monitoring duration for which the first set of particles generated at each of the first plurality of locations is monitored; this duration may be understood as a configured average across the entire region, typically applied instantaneously at a given time. Likewise, as described above, causing the beam to traverse the region at a slower average rate may comprise increasing the configured monitoring duration for which the first set of particles generated at each of the first plurality of locations is monitored.
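The dwell-time variant just described can be sketched in a minimal, purely illustrative form. The microsecond values and function names are assumptions, not values taken from the disclosure.

```python
# Illustrative sketch: the per-location monitoring duration (dwell time) is
# shortened in "fast" mode and lengthened in "slow" mode, trading S/N for
# refresh rate. Numeric values are assumptions for illustration.

FAST_DWELL_US = 1.0    # short dwell: lower S/N, faster frame refresh
SLOW_DWELL_US = 50.0   # long dwell: higher S/N while the view is static

def configured_dwell_us(fov_changed):
    """Configured monitoring duration per location, in microseconds."""
    return FAST_DWELL_US if fov_changed else SLOW_DWELL_US

def frame_time_s(n_locations, fov_changed):
    """Approximate total traversal time for one image frame, in seconds."""
    return n_locations * configured_dwell_us(fov_changed) * 1e-6
```

For a fixed number of locations, the frame time scales directly with the dwell, so halving the dwell roughly doubles the achievable display frame rate.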
The monitoring duration may likewise be varied for particles monitored by the second detector. That is, in some embodiments, causing the beam to traverse the region or the first traversal path with a reduced total time or at a faster average rate comprises reducing a configured monitoring duration for which the second set of particles generated at each of the second plurality of locations is monitored.
As described above, another way in which the change in total traversal time may be achieved is to change the length of the traversal path. In some embodiments, such variations involve adjusting the size, extent, or coverage of the traversal path.
The configured field of view of the microscope may be considered a "default" field of view, which the microscope is configured to cover at some point during acquisition of the composite image frame (typically at the beginning). However, it will be appreciated that during acquisition of frames in the series, the field of view actually used need not be the same as the configured field of view for some or all of the frames. That is, the microscope may be caused to capture images with a modified field of view, while the configured field of view is still used to determine whether the field of view is nominally moving or stationary, and/or whether a "fast" or "slow" acquisition mode is used. Thus, the area covered by or bounded by a given traversal path need not coincide with the configured field of view of a given composite image frame, and may in fact correspond to only a portion of it. Relatedly, the configured field of view may be understood as being configured with respect to the composite image frame, rather than necessarily defining its extent.
The configured field of view is typically user-configured, but some automation of the sample displacement and/or beam deflection may additionally be applied in order to at least partially automate the guiding of the configured field of view around the sample.
Typical embodiments may involve having one or both of the first and second traversal paths cover, or substantially cover, the entire configured field of view of the microscope. However, it is also contemplated that, as part of acquiring a composite image frame, the field of view may be modified so as to change the time required to scan it, or in other words the time required to traverse the traversal path that covers it. The term "covering" as used in this context may be understood as "extending over". Preferably, a traversal path that "covers" a field of view describes a path whose extent and pattern are configured such that particles can be monitored from all, or substantially all, portions of the sample within the field of view as the beam traverses the path. However, "covering" may alternatively be understood as a path coinciding with the field of view, i.e. defining an area and/or boundary coincident with it.
For example, a "fast" or "dynamic" acquisition mode may include implementing a first traversal time and a corresponding first traversal path that are shorter than a second traversal time and a corresponding second traversal path, respectively. Thus, the first traversal path may have a smaller coverage area than the second traversal path. This may be understood as corresponding to, for example, the aforementioned "reduced raster" configuration, or any scan pattern covering a sub-area on the sample surface that is smaller, i.e. has a smaller area, than the area corresponding to the field of view of the configuration. In this way, in some embodiments, the first traversal path may cover a modified field of view, i.e., a field of view that is modified relative to the field of view of the microscope's configuration for a given frame. Typically in these embodiments, the modified field of view corresponding to the first traversal path is smaller than the configured field of view, and preferably is partially or fully contained within the configured field of view. Thus, the modified field of view may be understood as corresponding to only a portion or sub-region of the area of the sample to be covered by the configured field of view. Typically, an image frame captured when operating in such a "fast" mode thus represents a smaller area than an image frame captured in an alternative "slow" mode. For this reason, for the continuity of display size and magnification between frames in the series, the resulting smaller composite image frames in the series are preferably displayed on the visual display in correspondingly reduced sizes.
By covering only a modified field of view that omits a portion of the configured field of view in frames acquired in the former case, and preferably capturing data from the entire configured field of view in the latter case, embodiments such as this may advantageously provide a faster image acquisition and display rate when the configured field of view is changing than when it is unchanged.
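A "reduced raster" of the kind discussed above might be sketched, purely for illustration, as a generator of beam positions over either the full configured field of view or a centred sub-region of it. The function name, the 50% default fraction and the grid parameters are assumptions, and at least a 2-by-2 grid is assumed.

```python
# Illustrative sketch: in "fast" mode the traversal path covers only a
# centred sub-region of the configured field of view, shortening the path
# and hence the frame time. Assumes nx, ny >= 2.

def raster_path(x0, y0, width, height, nx, ny, reduce=False, fraction=0.5):
    """Yield (x, y) beam positions for a simple raster over the region.

    With reduce=True, only a centred sub-region covering `fraction` of
    each dimension (a "reduced raster") is traversed.
    """
    if reduce:
        x0 += width * (1 - fraction) / 2
        y0 += height * (1 - fraction) / 2
        width *= fraction
        height *= fraction
    for j in range(ny):
        for i in range(nx):
            yield (x0 + width * i / (nx - 1), y0 + height * j / (ny - 1))
```

With the same number of grid points, the reduced raster covers a smaller area, which corresponds to the smaller modified field of view described above; alternatively, fewer points could be used over the smaller area to shorten the traversal further.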
In some embodiments, alternatively or additionally, the field of view applied when capturing frames may be enlarged for the purpose of operating in a "slow" or "static" mode. Thus, in certain embodiments, the second traversal path covers a modified field of view that has a larger extent than, and preferably partially or fully encompasses, the configured field of view. In this way, the modified field of view may correspond to a traversal path that is longer than one covering the configured field of view, and so requires a longer traversal time.
The application of a traversal path or rate depending on whether the field of view is changed or unchanged may be achieved by means of a mode parameter. Thus, in some embodiments, acquiring the series of composite image frames is performed in accordance with an acquisition mode parameter, wherein when the acquisition mode parameter is equal to a first value, the beam is caused to traverse the first traversal path or region at an average rate that is faster than the average rate at which the second traversal path or region is traversed when the acquisition mode parameter is equal to a second value; the acquisition mode parameter is set to the first value if the configured microscope field of view is different from the microscope field of view for the immediately preceding composite image frame in the series, and is set to the second value if the configured microscope field of view is the same as the microscope field of view for the immediately preceding composite image frame in the series. Conversely, in such embodiments, when the acquisition mode parameter is equal to the second value, the beam is typically caused to traverse the region at an average rate that is slower than when the acquisition mode parameter is equal to the first value. This mode parameter may thus be used to indicate whether to apply a "fast" or "slow" acquisition mode configuration, corresponding to higher and lower average traversal rates respectively. According to some embodiments, the acquisition mode parameter may be the same parameter as either of the first mode parameter described in particular with respect to the first aspect and/or the second mode parameter described with respect to certain advantageous embodiments thereof.
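The per-frame setting of the acquisition mode parameter can be sketched as follows, purely for illustration. The value names FAST and SLOW stand in for the "first value" and "second value" of the text; they, and the treatment of a missing previous frame as a changed field of view, are assumptions.

```python
# Illustrative sketch of the per-frame acquisition mode parameter: the
# "first value" (fast) applies when the configured field of view differs
# from that of the immediately preceding composite frame; the "second
# value" (slow) applies when it is the same.

FAST, SLOW = 1, 2  # stand-ins for the first and second parameter values

def acquisition_mode(configured_fov, previous_frame_fov):
    """Return the acquisition mode parameter value for the next frame."""
    if previous_frame_fov is None or configured_fov != previous_frame_fov:
        return FAST   # FOV changed (or no previous frame): traverse faster
    return SLOW       # FOV unchanged: traverse slower for better S/N
```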
However, it is also contemplated that the acquisition mode parameters described in relation to embodiments of the second aspect may be separate and/or independent from the first mode parameters and the second mode parameters previously described. Also, the first value and the second value of each of the mode parameter described earlier and the acquisition mode parameter described later may be the same or different, respectively.
Typically, the value of the mode parameter is set, or at least maintained, at the beginning of the acquisition process for each composite image frame. In this way, the traversal rate may advantageously be changed at least on a per-frame basis. In some embodiments, however, the mode switching function is applied still more advantageously by adjusting the acquisition mode and the instantaneously configured average traversal rate during acquisition of a composite image frame, in response to changes in the configured microscope field of view effected during acquisition of that frame. Thus, in some embodiments, step (d) is performed during the traversal of the beam through the region and the monitoring of the first and second sets of particles, and for each of the plurality of pixels included in the second image frame, step (d) further comprises: setting the acquisition mode parameter equal to the first value if the configured microscope field of view is different from the microscope field of view for the immediately preceding pixel of the second image frame and the acquisition mode parameter is equal to the second value; or setting the acquisition mode parameter equal to the second value if the configured microscope field of view is the same as the microscope field of view for the immediately preceding pixel of the second image frame and the acquisition mode parameter is equal to the first value.
It should be appreciated that if the configured field of view changes in the middle of a frame, the region generally corresponding to the field of view may be considered to correspond to the field of view at the beginning of the frame being acquired, or to the field of view at a given or predetermined point during acquisition. The region may also be considered to correspond to more than one field of view, or to a combined field of view containing the portions of the sample surface included in any field of view through which the microscope is moved, or configured to move or otherwise change, during frame acquisition.
The application of an intermediate-frame acquisition mode parameter switch may advantageously affect the traversal of the beam over a plurality of subsequent locations, and hence the rate at which subsequent pixels in the second image frame are obtained. The condition that the configured microscope field of view is different from that for the immediately preceding pixel may be assessed at the time the value set of a given pixel is derived or processed, as whether the field of view differs from the field of view at the time the value set of the immediately preceding pixel in the frame was derived. The condition of a differently configured microscope field of view may also be taken to cover the case where there is no immediately preceding pixel, for example if the current pixel being processed, or whose value set is being acquired, is the first pixel in the second image frame. This difference in field of view between pixels may be understood as the field of view changing between the processing and/or acquisition of the previous pixel and that of the current pixel. The acquisition mode parameter being equal to the second value may be understood as an additional condition, namely that the mode parameter is currently set to the "slow" mode.
A pixel in the second frame defined as a previous or immediately preceding pixel generally refers to a pixel that precedes the current pixel in the order in which pixel values are derived and/or processed. In general, pixels are processed in an order corresponding to the order in which signals are obtained from locations in the region. On these conditions being met, setting the acquisition mode parameter equal to the first value is typically performed in order to achieve an increase in the average traversal rate. Thus, it will be appreciated that the mode parameter change in such embodiments depends both on the comparison of the configured fields of view for the two pixels and on the current mode parameter value.
The condition, described above, that the configured microscope field of view for the current pixel is the same as that for the immediately preceding pixel of the second image frame may be understood as the condition that movement or change of the field of view has stopped, so that the field of view is unchanged between the preceding pixel and the current pixel being processed or acquired. Setting the acquisition mode parameter equal to the second value additionally depends on the parameter currently being set to the "fast" mode, and is typically performed in order to achieve a reduction in the average traversal rate.
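The per-pixel, intermediate-frame switching logic of the preceding paragraphs might be sketched as follows, purely for illustration; the value names and the treatment of a missing preceding pixel as a changed field of view are assumptions.

```python
# Illustrative sketch: before each pixel of the second image frame is
# acquired, the mode parameter is switched if, and only if, the
# field-of-view comparison with the immediately preceding pixel disagrees
# with the current mode value.

FAST, SLOW = 1, 2  # stand-ins for the first and second parameter values

def update_mode(mode, fov_now, fov_prev_pixel):
    """Return the (possibly switched) acquisition mode for the next pixel."""
    fov_changed = fov_prev_pixel is None or fov_now != fov_prev_pixel
    if fov_changed and mode == SLOW:
        return FAST   # view started moving mid-frame: speed up at once
    if not fov_changed and mode == FAST:
        return SLOW   # view stopped moving mid-frame: slow down at once
    return mode       # comparison agrees with current mode: no switch
```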
In such embodiments, the ability to change acquisition mode during the acquisition of a single composite image frame may provide the benefit of slowing the beam traversal as soon as movement or change of the field of view stops, so that higher signal-to-noise data is obtained earlier than if the rate change were implemented only at the beginning of the next frame. This advantage is seen especially where the field-of-view change stops shortly after acquisition of a composite image frame has begun in the "fast" or "dynamic" mode. In that case, without an intermediate-frame switch, switching to the higher-resolution, higher signal-to-noise acquisition would be delayed by substantially the entire frame acquisition duration, and no signal-to-noise improvement would be achieved until after this delay, when the "slow" or "static" acquisition mode could first be applied from the start of the next frame.
Likewise, if the field of view begins to change, an important advantage achieved by speeding up mid-frame is that the increase in refresh rate occurs earlier, making navigation of the sample easier and more efficient for the user or viewer. It will be appreciated that in some embodiments it may be beneficial to process the affected image frames, particularly when the mode parameter change causes the configured number of pixels or the image frame resolution to change before the monitoring data of the composite image frame has been acquired, for example to improve the appearance and intelligibility of the frame when viewed by a user.
To achieve such image processing, a number of options for constructing a composite image frame for display are available, as are various methods of processing data already acquired mid-traversal before a mode change. For example, data acquired for a small number of pixels at low resolution in the "dynamic" mode may be interpolated at intermediate locations to provide the equivalent over a pixel grid of higher resolution, corresponding to the resolution of image frames acquired in the "static" mode. In some embodiments, similar interpolation may be applied to the composite image frame pixels only.
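A minimal, purely illustrative sketch of mapping low-resolution "dynamic"-mode pixels onto the higher-resolution "static"-mode grid is given below, using nearest-neighbour replication for simplicity; real implementations might use bilinear or higher-order interpolation at the intermediate locations instead. The function name and the nested-list image representation are assumptions.

```python
# Illustrative sketch: nearest-neighbour upsampling of a low-resolution
# image (list of rows) onto a higher-resolution pixel grid, so that a
# partially fast-acquired frame can be displayed at a uniform resolution.

def upsample_nearest(image, out_h, out_w):
    """Map an in_h x in_w image onto an out_h x out_w grid."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```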
In view of the above advantages, it will be appreciated that it is beneficial to effect the traversal rate change immediately in response to an intermediate-frame transition between a changing and an unchanged field of view. In this way, the advantageous effects can be achieved more quickly. Generally, in certain embodiments, the setting of the acquisition mode parameter equal to the first value or the second value is performed before monitoring particles generated within the sample at a location within the region corresponding to, or represented by, the immediately subsequent or next-processed pixel in the second image frame. The immediacy of these mode changes is reflected in the improved responsiveness of the refresh rate, image resolution and/or signal-to-noise enhancement. It will be appreciated that in some embodiments obtaining the first image frame may include some field-of-view-dependent pixel processing. In particular, it may comprise processing similar to that applied to the second image frame, increasing the signal-to-noise ratio by combining the data of corresponding pixels in image frames having the same field of view.
Thus, in some embodiments, acquiring the composite image frame further comprises, for each of the plurality of pixels included in the first image frame: if the configured microscope field of view is different from the microscope field of view for the immediately preceding composite image frame in the series, maintaining the derived value of the pixel in the first image frame for use in the composite image frame; or, if the configured microscope field of view is the same as the microscope field of view of the immediately preceding composite image frame in the series, combining the derived value of the pixel with the derived values of corresponding pixels of each of one or more previous first image frames of the series for which the microscope field of view is the same as the configured microscope field of view, so as to obtain a combined pixel value with increased signal-to-noise ratio, and substituting the combined pixel value for the derived pixel value in the first image frame for use in the composite image frame. As discussed above, the case where the configured microscope field of view is different from that of the immediately preceding composite image frame in the series may include the absence of an immediately preceding image frame, for example if the current frame is the first in the series.
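The per-pixel combination across frames sharing the same field of view might be sketched, purely for illustration, as a running mean; averaging over N frames of the same view improves the signal-to-noise ratio roughly as the square root of N. The function name and the running-mean formulation are assumptions.

```python
# Illustrative sketch: while the configured field of view is unchanged,
# newly derived pixel values are averaged with the accumulated value from
# previous frames sharing that field of view.

def combine_pixel(new_value, accumulated_value, n_previous_frames):
    """Running mean of a pixel value over frames with the same field of view.

    n_previous_frames == 0 means the FOV has just changed (or this is the
    first frame), so the fresh value is kept unchanged.
    """
    if n_previous_frames == 0:
        return new_value
    total = accumulated_value * n_previous_frames + new_value
    return total / (n_previous_frames + 1)
```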
In addition to changing the acquisition mode to the "static" mode in order to improve the signal-to-noise ratio of the acquired data, in some embodiments the process involves aggregating, or binning, groups of pixels (typically X-ray pixels) in the second image frame to form an image with fewer pixels to process and display. It will be appreciated that such combined pixels have an improved signal-to-noise ratio. Thus, in some embodiments, acquiring the composite image frame may further comprise: aggregating the sets of pixel values for one or more subsets of pixels in the second image frame to obtain one or more corresponding sets of aggregate pixel values, or "super-pixel" value sets. For example, each aggregate value set may preferably correspond one-to-one to a subset of pixels. In such embodiments, acquiring the composite image frame may include replacing each of the one or more subsets of pixels in the second image frame with an aggregate pixel, or "super-pixel", having a set of values equal to the respective set of aggregate pixel values.
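The binning of pixel groups into super-pixels might be sketched as follows, purely for illustration: the values of each b-by-b block are summed so that the second image frame has fewer, higher-count pixels. The function name is an assumption, and for simplicity the block size is assumed to divide the frame dimensions exactly.

```python
# Illustrative sketch: sum each b x b block of a frame (list of rows of
# scalar counts) into a single "super-pixel", giving an image with fewer
# pixels, each with an improved signal-to-noise ratio.

def bin_pixels(frame, b):
    """Aggregate b x b blocks of `frame` into super-pixels by summation."""
    h, w = len(frame), len(frame[0])
    return [[sum(frame[r + dr][c + dc]
                 for dr in range(b) for dc in range(b))
             for c in range(0, w, b)]
            for r in range(0, h, b)]
```

The same idea extends to pixels whose "values" are whole energy histograms, in which case the histograms of a block would be summed bin by bin.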
Typically, each of the first detector and the second detector views a region of the sample under or in accordance with a set of configured microscope conditions. In each of the obtained first and second image frames, each pixel may represent or have a value according to a count of particles monitored by the detector generated at a location on the sample corresponding to the pixel, or may for example indicate an energy distribution of those monitored particles or have a set of values indicating the energy distribution. In some embodiments, one or more pixels in either or both of the first and second image frames may have a set of values corresponding to histograms of photon energy obtained at corresponding locations, which may generally correspond to small areas on the sample surface. It should be appreciated that the number of values in each set may vary depending on the photon energy monitored.
In some embodiments, during acquisition of a composite image frame, the combination of pixel values of the respective second image frame with corresponding pixel data of a previously acquired second image frame corresponding to the same location in the sample may be performed automatically when the field of view is the same as that of the previously acquired second image frame. If the field of view is intended to be stationary, that is to say the configured field of view is stationary, but there is some small shift in position, for example due to thermal effects on the stage mechanics or beam deflection electronics, the positional difference between successive image frames can be determined by cross-correlation of successive electron images (as described, for example, at https://en.wikipedia.org/wiki/Digital_image_correlation_and_tracking), and the measured shift used to ensure that pixel values are combined only for successive frames where the data originates from the same position on the sample.
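A brute-force, purely illustrative sketch of measuring such a shift by cross-correlation is given below. Practical systems would normally use FFT-based correlation for speed; here every integer shift within a small search window is simply scored and the best kept. The function name and window size are assumptions.

```python
# Illustrative sketch: estimate the (row, column) shift of `img` relative
# to `ref` (both lists of rows of intensities) by exhaustive integer-shift
# cross-correlation within a small search window.

def measure_shift(ref, img, max_shift=3):
    """Return the (dy, dx) shift maximising the correlation score."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    ys, xs = y + dy, x + dx
                    if 0 <= ys < h and 0 <= xs < w:
                        score += ref[y][x] * img[ys][xs]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

The measured shift could then be used either to discard frames whose drift exceeds a threshold or to re-register them before their pixel values are combined.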
During the process of combining the pixel values of second image frames, microscope conditions that are configured the same as for a previous second image frame (which may be stored, for example, in a memory or any kind of machine-readable medium) may be taken to mean that the signals acquired for the pixels have the same content. It will also be appreciated that in embodiments where data corresponding to the second (or first) image frame is stored, for example for use with a subsequent frame in the series in an "accumulation" mode, it is not necessary to store pixel data for every position of the frame. For example, if the focus, astigmatism, magnification or acceleration voltage of the electron beam (or other type of charged particle beam) is not changed between acquiring the second image frame in question and the previous one, then unless the sample or scanning position has moved, the measurement of a pixel will typically constitute a repeated measurement of the previous pixel value at that position on the sample and can thus be used to improve the signal-to-noise ratio of that pixel. In other words, the configured microscope conditions under which the second image frame is acquired are considered the same as those under which the previous second image frame was acquired.
Displaying the composite image frames in real time typically includes processing and displaying the image data as soon as it is acquired, so that it is available virtually immediately. In this way, the user can navigate around the sample using the real-time composite image frames as feedback. For such live guidance, examples of methods of interacting with a user, and suitable methods for composing, formatting and displaying a composite image frame, are described in WO2019/016559A1, pages 8 to 10. Techniques such as those described in WO2012/110754A1 can be used to combine image frames into a color composite image. "Real-time" display may be understood to mean that there is substantially no perceptible delay between the user performing a navigational action and that action being presented on the visual display in the form of a moving image or video comprising the displayed series of composite image frames.
For example, a user may cause a change in the field of view by altering the magnification, such that the focused electron beam is deflected over a smaller or larger area of the sample. Alternatively, the user may move a stage or carriage on which the sample is supported, such that the sample moves relative to the focused electron beam, thereby moving the field of view accessed by the deflected beam to a new region of the sample surface. The field of view may also be changed by altering the electron beam deflection so that the focused beam is directed across a different area of the sample. Microscope conditions such as beam voltage can also be altered, which will change the contrast in the electron image as well as the information content of the additional signal. In any of these cases, immediately replacing the existing image data with newly acquired data allows the user to see the new field of view within a single frame time. If the frame time is short enough, the user will be able to track features on the surface of the sample using the visual display unit while the field of view is changing.
If, after acquisition of any frame of data, the field of view and microscope conditions are the same as for the previous frame, the acquisition mode is typically changed to one in which the S/N of the displayed image is improved. This may be achieved by increasing the time spent accumulating the data of a frame and/or by signal averaging or accumulating the data from consecutive frames. Thus, in some embodiments, as the user moves the field of view over the surface of the sample to find a region of interest, the user is able to see a combination of the shape and form of the sample provided by the electron image and the supplementary information about the composition or nature of the material provided by the additional signal. Once the region of interest is in view, the user can stop moving, and the signal-to-noise ratio will improve quickly without any interruption of the interaction or analysis session being required of the user.
The inventors have found that even if the additional signal gives a single frame of data with poor S/N, the image is often sufficient to give a rough location of the region of interest.
In addition, when successive frames are displayed while the field of view is changing, the noise in each frame is different, and the eye/brain combination achieves a time-averaging effect that allows the user to identify moving features that would be obscured in single-frame data. Once the user sees a feature of interest and stops moving, signal averaging can be started automatically, so that the visibility of the feature improves quickly after only a few frames have been recorded.
The inventors have realized that the ability to see moving features in successive frames of noise data can be further exploited by increasing the average rate at which the beam is traversed through the region in order to increase the frame rate at which new composite images can be generated for display. Increasing the frame rate will reduce the effective acquisition time per pixel to monitor the second particle and degrade the S/N, but will enable faster moving features to be imaged without smearing and the rate can be optimized to allow the user to track the features effectively.
The inventors have also found that it is more difficult for the eye/brain to discern fine details in a moving image as the field of view changes. Thus, when the field of view changes, the displayed image may have a lower resolution (fewer pixels) without affecting the user's ability to track moving features. To generate a second image frame of lower resolution, the sets of pixel values of adjacent pixels may be aggregated or summed to give a set of pixel values corresponding to a single "super-pixel" representing a larger region on the sample. Thus, a second image frame may be prepared that covers the same area on the sample and uses "data merging" to prepare a reduced number of "super pixels". The same effect can be achieved by monitoring the second particle data while the electron beam traverses an area equivalent to the area covered by the "super pixel". Alternatively, the electron beam may be positioned at a series of grid points at coarser intervals to obtain monitored second particle data at fewer pixel locations. Since each set of pixel values may require significant computational cost to derive the values that will be used to generate the composite image for display, by reducing the number of pixels per frame, the overall computational time may be significantly reduced. Furthermore, for the same frame time, if the total acquisition time is effectively allocated between a smaller number of pixels, each set of pixel values will yield a derived value for a composite image having an improved S/N compared to an image having more pixels. Even though the number of pixels per frame may be reduced, the visual display image may be maintained at the same size by well known techniques such as pixel replication, interpolation or "upscaling" that map image data having a given pixel resolution onto visual displays having different pixel resolutions. 
Further, if the number of pixels in the second image frame is less than the number of pixels in the first image frame, the set of pixel values of the second image frame may be similarly increased by copying, interpolation or scaling up if necessary to provide the same number of pixel values as the first image frame, thereby facilitating the preparation of the composite image frame.
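The "data merging" of adjacent pixels into super-pixels, and the subsequent replication back up to display resolution, can be sketched as follows. This is a purely illustrative sketch and not part of the claimed subject-matter; the function names and the 2 x 2 merging factor are assumptions.

```python
def bin_superpixels(frame, factor=2):
    """Sum each factor x factor block of pixel values into one "super-pixel",
    reducing the pixel count by factor**2 while preserving total counts."""
    h, w = len(frame), len(frame[0])
    return [
        [sum(frame[y * factor + dy][x * factor + dx]
             for dy in range(factor) for dx in range(factor))
         for x in range(w // factor)]
        for y in range(h // factor)
    ]


def upscale_replicate(frame, factor=2):
    """Map the reduced image back onto the display resolution by simple
    pixel replication (each value repeated factor x factor times)."""
    out = []
    for row in frame:
        wide = [v for v in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out
```

Because the super-pixel is a sum of its constituent measurements, its value carries proportionally more counts and hence a better S/N than any single constituent pixel.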
To achieve this step change in guidance efficiency, in which the user can make decisions "on the fly", an important advantage is that, in embodiments with three or more detectors, a user may view two or more images simultaneously such that all images are at least within the user's peripheral vision. Preferably, additional image information about the material composition or properties is provided as a color overlay on the electron image, providing the equivalent of a "head-up" display that gives additional data without requiring the user to take their line of sight away from the electron image.
As mentioned above, the first detector is typically an electron detector. It is contemplated that other types of monitoring devices may be used.
In a typical embodiment, the first detector is adapted to monitor resulting particles that provide data comprising either or both of topographical information about the sample region and atomic number information about the sample material. Such data may typically be provided by a secondary electron detector or a backscattered electron detector. Such a detector may thus be adapted to quickly provide an image frame comprising information that enables the user to rapidly direct the field of view around the sample surface.
In some embodiments, for the configured microscope conditions, the rate at which the resulting particles that the second detector is adapted to monitor are generated within the sample is less than one tenth of the rate at which the resulting particles that the first detector is adapted to monitor are generated within the sample. For example, when the method is used with an electron microscope, the emitted X-rays generated in response to the electron beam impinging on the sample are typically generated at a rate an order of magnitude or more below the rate at which electrons are emitted under the given electron microscope conditions. In this context, the rate refers to the number of particles, whether matter or electromagnetic radiation, generated per second. In some embodiments, the rate at which the particles monitored by the second detector are generated is one hundredth of the rate at which the particles monitored by the first detector are generated.
In some embodiments, for example, involving electron back-scattering diffraction analysis, there may be no such difference in the first and second particle generation or monitoring rates. However, the S/N of the signal derived from the data of the second particles may still be significantly lower than the S/N of the signal from the data of the first particles.
In different embodiments, the second detector may be adapted to monitor different types of particles, e.g. X-rays, secondary electrons and backscattered electrons.
In some embodiments, the second detector is any one of an X-ray spectrometer, an electron diffraction pattern camera, an electron energy loss spectrometer, or a cathodoluminescence detector.
In some embodiments, monitoring the second set of particles to obtain a second image frame comprises: obtaining two or more different types of signals from a second detector to obtain sub-image frames corresponding to each of the signals, and combining the first image frame with the second image frame includes combining the first image frame with one or more of the sub-image frames.
Thus, in some embodiments, in order to obtain different types of information, the sub-image frames may be obtained by processing data from the second detector. For example, an X-ray spectrum providing a measurement of the number of photons recorded for each of a set of energy ranges may be processed to measure the number of photons corresponding to a particular characteristic line emission, even when that line emission is spread over a range of energies such that the recorded data from two different line emissions overlap in energy.
In some embodiments, the electron diffraction pattern recorded by the second detector (such as an imaging camera) may be processed to determine the crystalline phase of the material under the electron beam and the orientation of the phase, such that sub-images corresponding to different phases and different crystallographic orientations may be generated.
Thus, in some embodiments, multiple signals may be derived from the same detector. Typically, in such embodiments, the second detector may output two or more signals of different types, and these signals may correspond to different types of monitored particles and may be used to obtain different sub-image frames. For example, the different types of signals that may be output include: a spectrum obtained by an X-ray spectrometer, an electron diffraction pattern obtained by an electron-sensitive camera, and a spectrum obtained by an electron energy loss spectrometer or a cathodoluminescence detector. Any of these signal types may be used to derive either the first image frame or the second image frame, or may be used to derive a sub-image frame. Thus, in some embodiments, monitoring the second set of particles to obtain a second image frame comprises: monitoring two or more subsets of particles of the second set, each of the subsets corresponding to a different type of signal obtained from the second detector, to obtain sub-image frames corresponding to each of the subsets.
Some embodiments include a third detector of a different type than the first detector and the second detector. For example, each of the first detector, the second detector, and the third detector may be any one of a secondary electron detector, a backscattered electron detector, and an X-ray detector.
As described above, the pixels of the image frame may represent or have a value indicative of the energy distribution of the monitored particles. This may be achieved by obtaining two or more sub-image frames, which are subsets or components of the image frames and each corresponding to a different range of particle energies. Thus, in some embodiments, monitoring the second set of particles to obtain a second image frame comprises: monitoring two or more subsets of particles of a second set to obtain sub-image frames corresponding to each of the subsets, each of the subsets corresponding to an energy range of a different particle, wherein each sub-image frame comprises a plurality of pixels corresponding to a plurality of locations within the region and derived from the monitored particles included in the corresponding subset and generated at the plurality of locations within the region; and combining the sub-frames together to generate a second image frame such that the second image frame provides, for each of the plurality of pixels, data derived from particles generated at corresponding locations within the region and included by each of the subsets.
In this way, for each composite image frame, the second detector may provide more than one associated image (sub-image frame), so as to individually monitor resulting particles of different energies or different energy bands. The individual sub-images may be combined together in a manner that allows the pixel values or intensities of the constituent pixels of each of the sub-frames (which correspond to the particle counts at the corresponding sample locations) to be distinguished. This may be achieved, for example, by assigning a different color to each sub-image frame, i.e. by rendering each sub-image frame in a different color. This may be performed such that the visible contribution to the resulting color at a given location or pixel in the second image frame (and thus in the composite image frame) provides a visible indication of the intensity of the monitored particles, in the corresponding energy band or subset, generated at that location.
Thus, in some embodiments, a composite color second image frame based on the sub-image frames may be formed and then combined with the first image frame to form a composite image.
For example, in embodiments in which the second detector is an X-ray detector, for each composite frame in the series, the second detector monitors the intensity of the characteristic emission of the plurality of chemical elements by monitoring a plurality of subsets of particles whose energy ranges correspond to the characteristic energies or energy bands of the chemical elements. Thus, a plurality of sub-images are obtained from a single X-ray detector, each sub-image corresponding to a different chemical element.
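The color-coded combination of elemental sub-image frames described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation; the function name, the 0..1 intensity normalisation and the per-element colors are assumptions.

```python
def colorize_subimages(subimages, colors):
    """Blend per-element sub-image frames into one RGB second-image frame.

    `subimages` maps an element name to a 2-D list of normalised intensities
    (0..1); `colors` maps the same names to (r, g, b) tuples.  Each element's
    intensity modulates its assigned color, and the contributions are summed
    and clipped per channel, so the visible color at a pixel indicates which
    characteristic emissions were detected at the corresponding location."""
    first = next(iter(subimages.values()))
    h, w = len(first), len(first[0])
    rgb = [[(0.0, 0.0, 0.0)] * w for _ in range(h)]
    for element, frame in subimages.items():
        cr, cg, cb = colors[element]
        for y in range(h):
            for x in range(w):
                r, g, b = rgb[y][x]
                v = frame[y][x]
                rgb[y][x] = (min(1.0, r + v * cr),
                             min(1.0, g + v * cg),
                             min(1.0, b + v * cb))
    return rgb
```

For example, assigning red to an Fe sub-image and green to a Si sub-image produces an overlay in which the hue at each pixel indicates the locally dominant element.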
In some embodiments, two or more sub-image frames are not combined to form a second image frame, but are instead processed separately according to step (d) of the method to form a composite image frame before being combined with the first image frame. Thus, in various embodiments, any sub-image frame or any image frame can be acquired in either the "accumulate" mode or the "refresh" mode.
Suitable methods for combining data from the first image frame and the second image frame to create and display a composite image frame are described in WO2019/016559A1, pages 15 to 17. For each composite image frame, the first image frame and the second image frame may be combined to form a single image frame containing data acquired by both the first detector and the second detector. This is preferably indicated visually to the user to allow individual discrimination of the first and second particle set information for each position in the image being displayed.
In some other embodiments, rather than overlapping two image frames, the combining of the first image frame and the second image frame is performed by displaying the first image frame and the second image frame side by side. Thus, combining the first image frame with the second image frame to generate the composite image frame may include juxtaposing the first image frame and the second image frame. Preferably, in such an embodiment, the two image frames are placed side by side with respect to each other such that when the composite image frames are displayed on the visual display, they are simultaneously visible within the field of view of the user. Thus, in these embodiments, the composite image frame will typically be at least twice as large, i.e. comprise at least twice as many pixels, as each of the respective first and second image frames.
The microscope conditions used to obtain the first image frame and the second image frame may include a number of different configurable conditions. Those that may be configured for the electron column of an electron microscope include magnification, focus, astigmatism, acceleration voltage, beam current, and scan deflection. That is, the aforementioned microscope conditions may be configured for the charged particle beam. Position and orientation may be configured for the sample, or in particular for a sample stage adapted to support the sample. In other words, the spatial coordinates may include positions along the X, Y and Z axes of a Cartesian coordinate system, as well as the tilt and rotation of the sample. Brightness and contrast may be configured for each of the first detector and the second detector.
Thus, the field of view of an electron microscope can generally be configured by setting microscope conditions such as sample stage position and orientation, magnification, and scan deflection, i.e. the degree of deflection applied to the scanned charged particle beam.
In some embodiments, the combining of pixels across image frames is not necessarily limited to the second image frame. In some embodiments, the use of an "accumulation" mode to obtain image frames may be applied to the first image frame as well as the second image frame. That is, the acquisition of the composite image frame may further include: for each pixel of the first image frame, if the configured microscope conditions are the same as those of the stored first image frame of the immediately preceding acquired composite frame of the series, and if the respective pixel corresponds to a location within the region corresponding to a stored pixel comprised by the stored first image frame, combining the value of the stored pixel with the value of the pixel so as to increase the signal-to-noise ratio of the pixel. In embodiments where the signal-to-noise ratio of the signal from the first detector is low or below a desired threshold, it may be advantageous to apply the signal averaging or accumulation mode of acquiring image frames to the image from the first detector.
The frame rate of the visual display, which is the rate at which successive composite images in the series are displayed, may vary between different embodiments and may be configurable. In some embodiments, the frame rate at which the composite image frames are displayed is at least 1 frame per second, preferably at least 3 frames per second, and more preferably 20 frames per second. In some embodiments, a single composite image frame is processed at any given time. In such embodiments, the example frame rates listed above correspond to a composite image acquisition time or processing time of 1 second or less, 0.3 seconds or less, and 0.05 seconds or less, respectively.
In some embodiments, the rate at which the series of composite image frames are acquired and displayed is at least 10 frames per second, preferably at least 18 frames per second, more preferably at least 25 frames per second, and even more preferably at least 50 frames per second. Thus, the series of composite image frames is preferably displayed in the form of moving images, preferably at a display frame rate equal to the video frame rate.
In a preferred embodiment, combining the stored pixel with the pixel is performed by means of signal averaging or signal accumulation to improve the signal-to-noise ratio of the pixel. The output from a detector can be considered a signal, and the noise-reduction techniques of signal averaging and signal accumulation can therefore be used, in which the average or sum of a set of repeated measurements is taken, the set being the measurements made under the same conditions for a given pixel, i.e. for a pixel corresponding to a particular location within the region.
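The per-pixel signal averaging described above can be sketched with an incremental (running) mean, which is equivalent to averaging all repeated measurements of a pixel. This is an illustrative sketch under the assumption of additive noise; the class and method names are hypothetical.

```python
class PixelAccumulator:
    """Running (signal-averaged) value of repeated measurements of one pixel.

    After n repeated measurements under the same conditions, the standard
    error of the mean falls as 1/sqrt(n), so the pixel's S/N improves as
    more frames from an unchanged field of view are combined."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def add(self, measurement):
        self.n += 1
        # incremental mean update: equivalent to averaging all n measurements
        self.mean += (measurement - self.mean) / self.n
        return self.mean

    def reset(self):
        """Discard history, e.g. when the field of view changes."""
        self.n, self.mean = 0, 0.0
```

Signal accumulation (summing rather than averaging, with a brightness scale proportional to the count) follows the same pattern with `self.mean` replaced by a running sum.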
According to a third aspect of the present invention there is provided a method for analysing a sample in a microscope, the method comprising:
Obtaining a series of composite image frames using a first detector and a second detector different from the first detector using two acquisition modes, wherein acquiring data for the composite image frames in the first mode comprises:
a1 At time T1), traversing the charged particle beam through a region of the sample, said region corresponding to a field of view of a configuration of the microscope,
A2 Monitoring a set of resulting first particles generated within the sample using a first detector to obtain a first image frame comprising N1 pixels in which pixel values correspond to the monitored first particles from the vicinity of a location within the region,
A3 Monitoring a set of second resulting particles generated within the sample using the second detector to obtain a second image frame comprising N2 pixels, wherein a pixel has a set of values derived from the monitored second particles from a vicinity of a location within the region,
A4 For each pixel in the second image frame, if the configured microscope field of view is different from the microscope field of view for the immediately preceding composite image frame in the series, using the value of the pixel as the value to be used for generating the next composite image frame in the series,
A5 If the configured microscope field of view is the same as the microscope field of view for the immediately preceding composite image frame in the series, changing to a second acquisition mode, wherein acquiring the composite image frame in the second mode comprises:
b1 At time T2, traversing the charged particle beam through a region of the sample, said region corresponding to a field of view of a configuration of the microscope,
B2 Monitoring a set of resulting first particles generated within the sample using a first detector to obtain a first image frame comprising M1 pixels in which pixel values correspond to the monitored first particles from a vicinity of a location within the region,
B3 Monitoring a set of second resulting particles generated within the sample using a second detector to obtain a second image frame comprising M2 pixels, wherein a pixel has a set of values derived from the monitored second particles from a vicinity of a location within the region,
B4 For each pixel of the second image frame, if the configured microscope field of view is the same as the microscope field of view of the immediately preceding composite image frame in the series, combining the set of values of the pixel with one or more sets of values of corresponding pixels in the previously acquired second image frame from the same field of view, so as to increase the signal-to-noise ratio of the values of corresponding pixels to be used to generate the next composite image frame in the series,
B5 If the configured microscope field of view changes from the microscope field of view for the immediately preceding composite image frame in the series, then changing to the first acquisition mode,
And
C) Generating a composite image frame using the sets of pixel values for the second particles intended for generating the new composite image frame, together with the pixel values for the first particles, such that the composite image frame is a spatial representation of the region, wherein the value of a pixel at a location in the composite image frame is derived from data derived from particles generated at the corresponding location within the region and detected by each of the first and second detectors,
And displaying the series of composite image frames on a visual display in real time,
Wherein the visual display is updated to display each composite image frame in turn, so as to allow the viewer to identify potential features of interest when the field of view is static or changing,
Wherein the time T1 to traverse the region in the first mode is less than the time T2 to traverse the region in the second mode. The method may also be understood as being provided as an embodiment according to the second aspect. The first time T1 and the second time T2 may be understood, for example, to correspond to the total traversal time previously discussed in the present disclosure with respect to the second aspect. Thus, the mode switching function may be defined in terms of two discrete acquisition modes, typically involving switching between these modes based on field of view movement or other changes.
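The mode-switching function defined by steps A5 and B5 above can be sketched as a simple two-state machine. This is an illustrative sketch only; the mode names and function name are assumptions, and the frame-acquisition details are elided.

```python
FAST, SLOW = "fast", "slow"  # first and second acquisition modes

def next_mode(current_mode, fov_changed):
    """Mode switching per steps A5 and B5: a static field of view moves
    acquisition into the slow accumulating mode, and a change of the field
    of view moves it back into the fast refresh mode."""
    if current_mode == FAST and not fov_changed:
        return SLOW          # step A5: field of view unchanged
    if current_mode == SLOW and fov_changed:
        return FAST          # step B5: field of view changed
    return current_mode      # otherwise remain in the current mode
```

With T1 < T2, remaining in the fast mode while the field of view moves keeps the display responsive, while dropping into the slow mode on a static field of view allows pixel values to be accumulated for improved S/N.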
According to a fourth aspect of the present invention there is provided an apparatus for analysing a sample in a microscope, the apparatus comprising an X-ray detector, preferably a second detector according to any of the first, second and third aspects, a processor and a computer program which, when executed by the processor, causes the processor to perform a method according to any of the first, second and third aspects.
Such a device may be adapted to perform the method according to any of the first, second and third aspects.
In some embodiments, the apparatus is adapted to display signals generated when a focused electron beam in an electron microscope is scanned over a two-dimensional area on the surface of a sample, wherein a first signal is derived from an electron detector and at least one auxiliary signal is derived from a different detector, the at least one auxiliary signal providing information about the content or material properties of a single chemical element rather than the atomic number; wherein each signal is measured at a two-dimensional array of electron beam locations covering the area, and a corresponding pixel array of the measurement results constitutes a digital image covering the field of view of the area; wherein the visual display is adapted to show the digital images of all the signals such that the images are within the range of the user's peripheral vision, or combined into a single composite color image; wherein a complete set of pixel measurements covering the field of view for all the signals, and the preparation of the visual display, are performed and completed within a short period of time; wherein the complete set of pixel measurements for all signals covering the field of view, and the update of the visual display, are repeated continuously; wherein, if the field of view and the conditions have not changed, the results of more than one measurement of a signal at the same pixel location are used to improve the signal-to-noise ratio of the displayed result; and wherein, if the field of view or the conditions have changed, the displayed image is refreshed from the new measurements, the short period of time being small enough that the user can observe changes in the image while directing the field of view around the sample.
In such embodiments, the signal-to-noise ratio of the displayed result of more than one measurement of a signal is typically improved by using a Kalman average of these measurements, or by summing these measurements and altering the brightness magnitude according to the number of measurements.
In this way, when repeated measurements of a pixel, i.e. of a position on the sample, are obtained in multiple consecutive second image frames in the series of composite image frames acquired by the apparatus, a Kalman recursive filter can be used to improve the signal-to-noise ratio of the pixel measurements. In some embodiments, improvement of the image signal is achieved by adding together the values of successive pixel measurements and adjusting the brightness in accordance with the number of measurements that have been added together.
Typically, the short period of time is less than 1 second, preferably less than 0.3 seconds, desirably less than 0.05 seconds. Thus, the device may be configured to perform and complete the preparation of the visual display quickly enough that the device user experiences no or minimal delay.
The apparatus may be configured to automatically identify when the field of view is changing, in order to switch from an "averaging" or "accumulating" mode, in which successive frames in the series are added together, to a "refresh" mode. In some embodiments, the field of view is considered to be changing if the sample is intentionally moved, or the scanned area intentionally changed, under user control.
In some embodiments, changes in the field of view or microscope conditions are detected by a mathematical comparison of the new digital image with an earlier acquired digital image. The apparatus may be configured to compare successive frames in the acquired series to identify a change in the field of view. The apparatus may be configured to operate in a "refresh" mode for portions of the sample that are newly introduced into the field of view as the user directs the field of view around the sample, and in an "accumulate" mode for portions of the sample that remain within the field of view.
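One possible mathematical comparison of successive digital images is sketched below. The mean-absolute-difference metric, its normalisation by the mean intensity and the threshold value are illustrative assumptions, not the comparison defined by the patent.

```python
def fov_changed(prev_frame, new_frame, threshold=0.1):
    """Detect a change of field of view by comparing two successive digital
    images: the mean absolute pixel difference is tested against a chosen
    fraction (`threshold`) of the mean intensity of the earlier frame."""
    n = sum(len(row) for row in prev_frame)
    diff = sum(abs(a - b)
               for pr, nr in zip(prev_frame, new_frame)
               for a, b in zip(pr, nr))
    mean = sum(v for row in prev_frame for v in row) / n
    if mean == 0:
        return diff > 0
    return diff / n > threshold * mean
```

In practice the threshold would be chosen above the expected noise level of the first-detector signal, so that ordinary frame-to-frame noise does not trigger a spurious switch out of the "accumulate" mode.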
Typically, the auxiliary signal is derived from a spectrum obtained by an X-ray spectrometer, an electron diffraction pattern obtained by an electron sensitive camera, a spectrum obtained by an electron energy loss spectrometer or a cathodoluminescence detector.
In some embodiments, a scanning electron microscope comprising an apparatus according to the fourth aspect is provided. Thus, an electron beam instrument, in particular an electron microscope, adapted and/or configured to perform an advantageous analysis method may be provided.
According to a fifth aspect of the present invention there is provided a computer readable storage medium having stored thereon program code configured to perform the method according to any of the first, second and third aspects.
According to a sixth aspect of the present invention, there is provided a computer program comprising instructions which, when executed, cause an apparatus to perform the method according to any of the first, second and third aspects.
Drawings
Examples of the invention will now be described with reference to the accompanying drawings, in which:
FIG. 1 illustrates an example scan pattern for an electron beam traversing a region on a sample;
Fig. 2 is a schematic diagram showing the configuration of a scanning electron microscope system for recording an electron image and an X-ray image from a sample according to the related art;
FIG. 3 is a schematic diagram showing a scanning electron microscope arrangement in which a detector is located between the sample and the final lens pole piece of the microscope;
FIG. 4 is a flow chart illustrating an example method according to the present invention;
FIG. 5 illustrates an example composite image frame showing a region of a sample in which an electronic image and a color-coded X-ray image have been acquired by way of example of the present invention;
FIG. 6 is a screen shot illustrating functional elements of a visual display screen for user guidance according to an example of the invention;
FIG. 7 schematically illustrates a comparison between a configured field of view and corresponding traversal path of beam coverage in a static frame acquisition mode and a modified field of view and corresponding traversal path of beam coverage in a dynamic acquisition mode, according to an example of the invention; and
Fig. 8 is a flow chart illustrating steps of an example method according to the present invention.
Detailed Description
A method and apparatus for analyzing a sample in an electron microscope according to the present invention will now be described with reference to fig. 1 to 6 and 8.
A simplified representation of the key steps of an exemplary method according to the present invention is shown in fig. 8. The flowchart shows the steps for capturing and displaying a single composite image frame in the series. The charged particle beam is traversed according to a first (fast) scanning mode (path on the left) or according to a second (slow) scanning mode (path on the right), depending on whether the mode parameter has the first value or the second value, respectively. Although not explicitly shown in this simplified view, the mode parameter value may change one or more times before the traversal of a given frame is completed. In that case, the actual path of the beam in this example switches between the two paths, and thus between the respective traversal conditions.
Regardless of the mode parameter value, two sets of particles (electrons and X-ray photons in this example) are monitored by the respective first and second detectors. The frames acquired by this monitoring are combined to produce a composite image frame. The frame is then displayed on a visual display as part of a real-time updated stream, video or series of frames, thereby facilitating analysis of the sample. The adaptive scan mode advantageously allows the scan mode to be changed between a mode that provides a fast visual response and a high display refresh rate, and a mode that can provide more, higher-quality, less noisy image data. The mode parameter may be changed automatically, so as to produce a fast frame rate if the user reconfigures the microscope to change the field of view (e.g., by moving the sample stage), or to switch to a slower scan when the user has stopped commanding a change in the field of view.
Live imaging during analysis can be enhanced by sub-pole piece detectors with high solid angles. This arrangement is depicted in fig. 3. The ability of a user to interact with the visual display to quickly locate a region of interest on a sample is greatly enhanced if the second particle detector, which provides chemical or material information to enhance the first detector image, produces a signal with a high S/N. Conventional X-ray detectors mounted on the side ports of the SEM have a small collection solid angle for X-rays. However, an X-ray detector mounted below the pole piece of the electron lens (which has a sensor surrounding the incident electron beam and a sensing area facing the sample) can achieve a much higher total collection solid angle for all sensors. With a large collection solid angle, the S/N of the derived X-ray signal is much higher and an acceptable image for tracking moving features can be obtained with a much faster electron beam traversing the field of view area on the sample. This allows the frame update rate of the composite image frame to be faster as the field of view changes so that faster changes can be observed.
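The relation between collection solid angle and achievable scan speed follows from Poisson counting statistics, and can be illustrated with a small sketch. The count-rate figure and solid angles used here are illustrative assumptions, not values from the disclosure.

```python
import math

def relative_snr(solid_angle_sr, dwell_time_s, rate_per_sr_per_s=1.0e4):
    """For Poisson-limited X-ray counting, the expected counts scale with
    (collection solid angle) x (dwell time), and S/N scales as
    sqrt(counts).  A detector with k times the solid angle therefore
    reaches the same S/N with a dwell time k times shorter, i.e. a k
    times faster beam traversal of the field-of-view region."""
    counts = rate_per_sr_per_s * solid_angle_sr * dwell_time_s
    return math.sqrt(counts)
```

For instance, a sub-pole-piece detector collecting 1 sr achieves at 0.01 s dwell the same S/N that a 0.01 sr side-port detector achieves at 1 s dwell.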
As shown in the flow chart of fig. 4, another example method may be performed using an electron microscope (such as the electron microscope of the arrangement shown in fig. 2 or 3), which advantageously further includes adaptive frame processing that depends on the configured field of view movement. The method involves acquiring a series of composite image frames and the acquisition of the composite image frames is illustrated by the steps in fig. 4. In this example, for each of the first "fast" or "dynamic" mode and the second "slow" or "static" mode, a composite image frame is acquired at a predetermined frequency. The frequency in the first mode is greater than the frequency in the second mode and thus the resulting display update occurs at a faster rate when operating in the first mode than in the second mode. In other examples, multiple predetermined frequencies or variable frequencies may be applied to either or both of the modes.
As shown in the flow chart, the mode applied depends on whether the configured microscope field of view is changing or static. In this example, a first mode or "mode 1" involves monitoring the first set of particles generated at N1 locations to obtain an image frame comprising N1 pixels, and monitoring the second set of particles generated at N2 locations to obtain an image frame comprising N2 pixels, where N1 and N2 are integers. Likewise, the second mode or "mode 2" involves monitoring the first set of particles generated at M1 locations to obtain an image frame comprising M1 pixels, and monitoring the second set of particles generated at M2 locations to obtain an image frame comprising M2 pixels, where M1 and M2 are integers. In order to achieve a faster overall scan rate in the first mode than in the second mode, in this example N1 < M1 and N2 < M2. However, in other examples, either of these inequalities may not apply, and the number of monitored locations for either of the first and second sets of particles may be unchanged between the first and second modes. In this and other examples, the configured average time spent monitoring particles from a given location among the first and second pluralities of locations may be less in the first mode than in the second mode. This may be achieved by a faster continuous scan over the monitored locations, or by a shortened dwell time, while in the "dynamic" mode.
Suitable values for the numbers of pixels and positions in the above example are as follows. For example, mode 1 may have N1 equal to 49152 and N2 equal to 12288. The number of positions may be quadrupled on switching to mode 2, with M1 equal to 196608 and M2 equal to 49152. In this example, N2 < N1 and M2 < M1 because the X-ray data are merged so that groups of 4 pixels are combined into one aggregate "super-pixel" to improve the S/N. If the image frames are acquired with a typical aspect ratio of 4:3, this results in a first image frame of 256 x 192 pixels and a second image frame of 128 x 96 pixels in the first mode, and a first image frame of 512 x 384 pixels and a second image frame of 256 x 192 pixels in the second mode. N1 and M1, and N2 and M2, need not change by the same factor, as the numbers of positions will depend on the use case, the acquisition conditions and the sample to be analyzed. In this sense, N1, M1, N2 and M2 may range up to, and possibly beyond, 4194304 (2048 x 2048), and may be as small as, but are not limited to, 3072 (64 x 48).
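The example resolutions above, and their implication for per-pixel dwell time, can be checked with simple arithmetic. The frame times used here are illustrative assumptions only, not figures from the disclosure.

```python
# Mode 1 ("dynamic"): first image 256 x 192; the X-ray image is merged
# into 2 x 2 super-pixels, giving 128 x 96.
N1, N2 = 256 * 192, 128 * 96
# Mode 2 ("static"): four times as many positions in each image.
M1, M2 = 512 * 384, 256 * 192

assert (N1, N2) == (49152, 12288)
assert (M1, M2) == (196608, 49152)
assert M1 == 4 * N1 and M2 == 4 * N2

# Assumed frame times: a fast 0.05 s frame in mode 1 versus a slower
# 1 s frame in mode 2 give these average per-pixel dwell times:
dwell_mode1_us = 0.05 / N1 * 1e6   # about 1.0 microsecond per pixel
dwell_mode2_us = 1.0 / M1 * 1e6    # about 5.1 microseconds per pixel
```

The slower mode thus spends roughly five times as long on each of four times as many pixels, trading display responsiveness for improved S/N.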
In this example, during acquisition of a frame, if it is determined that the configured field of view is unchanged from the field of view of the previous frame in the series, the acquisition mode is switched from the first mode to the second mode, and if it is determined that the field of view is different from that of the previous frame, the acquisition mode is switched from the second mode to the first mode. The flowchart indicates that such a switch is made immediately, so that the frame acquisition process is restarted in the switched mode. However, in various examples, the switch may be performed at different times during the acquisition period. For example, the switch may be performed after acquisition of the current frame has completed. In a preferred example, however, the mode is switched as soon as the condition for the mode is identified. In that case, the remainder of the traversal and monitoring process for the current frame is preferably performed in the switched mode, at least until another subsequent switch is made.
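The mode-switching decision described above can be sketched as follows. This is a minimal illustration in Python; the function and dictionary names, and the tuple representation of a field of view, are hypothetical and not taken from any actual implementation:

```python
def select_mode(current_fov, previous_fov):
    """Return the acquisition mode for the next frame.

    Mode 1 (fast, "dynamic") is used while the configured field of
    view is changing; mode 2 (slow, "static") once it is unchanged.
    Both arguments are hypothetical (x, y, width, height) tuples.
    """
    return 1 if current_fov != previous_fov else 2


# Per-mode pixel counts from the worked example in the text:
# N1/N2 for mode 1, M1/M2 for mode 2, at a 4:3 aspect ratio.
PIXELS = {
    1: {"electron": (256, 192), "xray": (128, 96)},
    2: {"electron": (512, 384), "xray": (256, 192)},
}

mode = select_mode((0, 0, 100, 100), (10, 0, 100, 100))  # stage moved
print(mode)                       # 1 -> fast "dynamic" mode
print(PIXELS[mode]["electron"])   # (256, 192)
```

In a real system the comparison would of course be made against stage and deflection coordinates or inferred from the image data, as described later in the text.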
During acquisition of the frames, a user of the electron microscope system may cause the field of view of the microscope to cover different areas of the sample by moving the sample stage, and, when a particular region of interest is found, may slow or stop the movement of the stage in order to accumulate second image frame data for that region.
The electron beam of the electron microscope is caused to impinge on a plurality of locations within the sample region by deflecting the electron beam to perform raster scanning of the region.
A first detector is used to monitor a first set of particles generated at a plurality of locations within the sample as a result of the electron beam striking the plurality of locations to obtain a first image frame. A second set of resulting particles generated at a plurality of locations within the sample as a result of the electron beam striking the plurality of locations is monitored using a second detector to obtain a second image frame. The first detector and the second detector monitor respective signals derived from the first set of particles and the second set of particles for each location as the location is impacted by the electron beam. Thus, for a given frame, these electron and X-ray monitoring steps for the region are performed substantially simultaneously. The signals from each detector are used to generate an image formed by pixels arranged such that the relative positions of the pixels correspond to the relative positions within the region from which the monitored particles of the respective pixel values were generated.
In this example, for each pixel in the second image frame, if the configured microscope field of view is the same as the microscope field of view of the second image frame of the immediately previously acquired composite frame in the stored series, the stored pixel value is combined with the new pixel value to increase the signal-to-noise ratio of the pixel. Thus, those portions of the second image frame that correspond to portions of the sample monitored under the same microscope conditions as in the previous second image frame in the sequence are captured in "accumulation" mode and propagated to the composite image frame. Otherwise, if the fields of view are different, the pixel of the second image frame is captured in "refresh" mode and is not combined with the stored pixel value.
The first image frame is combined with the second image frame to generate a composite image frame by overlaying the two images on each other such that visual data from the two image frames can be distinguished independently and related to the relevant portion of the sample region.
Once the composite image frame has been generated, the composite image frame is displayed in real-time on a visual display. In this example, after the raster scan of the region is completed, the composite image frame of the region is displayed for 0.05 seconds.
When a series is acquired, the above steps are repeated for each composite image frame in the series.
In an electron microscope such as the arrangement shown in fig. 2, there are many signal sources that provide information about the composition or properties of the material. The signal from the BSE detector in an SEM (or the annular dark field detector in a STEM) is affected by atomic number, but it does not reveal any information about the content of individual chemical elements, nor can it uniquely identify the presence of a particular material under the incident electron beam. However, an electron-sensitive imaging camera can record an electron diffraction pattern that shows the intensity of electrons as a function of angular direction. Analysis of such patterns may reveal properties of crystalline material, such as the orientation or presence of specific crystalline phases. If a thin sample is to be analyzed, an Electron Energy Loss Spectrometer (EELS) can be used to obtain an energy spectrum of electrons transmitted through the film, and the presence of core-loss edges in the spectrum can, for example, reveal the presence of individual chemical elements. An electron spectrometer can also be used to obtain spectra revealing Auger emissions from bulk samples, which are characteristic of the content of individual chemical elements. A light-sensitive detector may reveal regions of the sample that emit cathodoluminescence (CL), a signal affected by the electronic structure of the material. An X-ray signal for a characteristic emission line from a single chemical element can be obtained by using a crystal, diffraction grating or zone plate whose geometry causes X-rays at the line energy to be selectively Bragg-reflected to an X-ray-sensitive sensor. All of these are examples in which the signal provides additional information about chemical element content or material properties; such a signal may be a useful adjunct to the electronic image from SE or BSE, and may be used with the present invention.
However, the following description applies to the specific case in which an X-ray spectrometer is used to provide additional information about the content of chemical elements.
In electron microscopes, there are typically one or more X-ray detectors and associated signal processors that are capable of recording the X-ray energy spectrum emitted by the sample. A histogram of photon energy measurements is recorded for a short time while the focused electron beam is deflected to a specific pixel location. The histogram is equivalent to a digital X-ray energy spectrum, and the number of acquired photons corresponding to the characteristic X-ray emission of a particular chemical element can be derived from that spectrum; this gives a set of signal values corresponding to a set of chemical elements. (One suitable method of processing the digital X-ray energy spectrum to minimize the effects of bremsstrahlung background and peak overlap and to extract the characteristic line intensities is described in "Deconvolution and background subtraction by least-squares fitting with prefiltering of spectra", P. J. Statham, Analytical Chemistry 1977, 49(14), 2149-2154, DOI: 10.1021/ac50022a014.) In addition, signals from electron detectors (such as secondary electron detectors or backscattered electron detectors) may be recorded at this location. Thus, if the electron beam is deflected to a set of pixel locations that make up a complete image frame, a set of pixel measurements corresponding to a digital electronic image and to one or more images corresponding to different chemical elements may be obtained. The data for these electronic and X-ray images are scaled appropriately, usually under the control of a computer, and transferred to a video display unit. Fig. 6 shows an example of a suitable display, in which an electronic image is displayed at the upper left and one or more X-ray images corresponding to different chemical elements are displayed immediately to the right of the electronic image, so that they can be observed while the user is concentrating on the electronic image.
To make it easier to view information simultaneously, the X-ray data from one or more chemical elements may be combined and displayed as a color overlay on the electronic image using techniques such as those described in PCT/GB2011/051060 or US5357110. In fig. 6, the user may use a computer mouse to position a cursor within the box labeled "Layer Map" on the display and "click" to select an option to display X-ray information overlaid on the electronic image.
When a user wants to explore a sample to find a region of interest, the field of view needs to be moved, and the method of processing and displaying the image needs to change in order to give real-time feedback that helps the user explore the sample effectively while the field of view is changing.
The field of view may be changed in a number of ways. The microscope magnification can be increased by reducing the current supplied to the beam deflector coils (or the voltage supplied to the beam deflector plates) so that the size of the scanned area on the sample is reduced. An offset may be added to the deflection or to a set of additional deflectors used to shift the scanned area on the sample. The sample may be physically moved by moving the support or stage supporting the sample to a new position relative to the electron beam axis. In all of these examples, the obtained signal data will correspond to different fields of view on the sample. In addition, if the user changes the operating voltage of the microscope, then all signal content will change.
When the field of view is being changed, the user needs to see the results as soon as possible, so the values at the pixels are replaced with new results of signal measurements at the corresponding beam positions and the image is refreshed with each new frame of data. A high frame rate ensures that the image is refreshed fast enough for the user to decide whether to continue changing the field of view. A feature must be visible in at least two consecutive frames to be tracked, so if the field of view is moving, the frame time limits the speed at which an object can be tracked. If the frame refresh time exceeds 1 second, the user will not feel in control and will find it difficult to stay focused. With a frame refresh time of 0.3 seconds, the user can track moving features well as long as the features move only a small fraction of the screen width per frame, but the screen update is very noticeable. If the frame refresh time is less than 0.05 seconds, the screen update is hardly noticeable because of the user's persistence of vision. However, S/N suffers at higher frame rates because noise in the image of a single frame is worse when the dwell time per pixel is shorter. If the dwell time per pixel is increased to improve S/N, the frame time will also increase unless the number of pixels is reduced; however, reducing the number of pixels in a frame gives an image with less spatial resolution. Thus, the dwell time per pixel and the number of pixels per frame need to be optimized to suit the image signal source and the desired speed of field-of-view movement.
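The trade-off between dwell time, pixel count and frame refresh time described above can be illustrated with a short calculation. This is a simplified sketch that ignores scan flyback unless an explicit overhead is supplied; the function and variable names are illustrative:

```python
def frame_time(n_pixels, dwell_s, overhead_s=0.0):
    """Total time to traverse one frame: pixels x dwell (+ overhead).

    A simplification of the trade-off discussed in the text; real
    scans also incur line and frame flyback time.
    """
    return n_pixels * dwell_s + overhead_s


# To refresh a 256 x 192 frame within the ~0.05 s persistence-of-vision
# limit mentioned above, the dwell time per pixel must satisfy:
n = 256 * 192
max_dwell = 0.05 / n
print(round(max_dwell * 1e6, 2))  # 1.02 -> about 1 microsecond per pixel

# Quadrupling the pixel count (512 x 384) at the same dwell time
# quadruples the frame time to ~0.2 s, which is very noticeable:
print(round(frame_time(512 * 384, max_dwell), 2))  # 0.2
```

This makes concrete why the "dynamic" mode reduces pixel count or dwell time rather than simply scanning faster at full resolution.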
A short frame refresh time is highly desirable when the field of view is moving, because it makes it easier for the user to track moving features and decide how to navigate to different areas. However, when the user stops moving the field of view, the refreshed image will appear noisy if a short frame time is used. Thus, there are conflicting requirements for optimal performance with moving and stationary fields of view. To resolve this conflict, the way the data is used is changed, switching from a "refresh" mode when the field of view is moving to an "average" mode when the field of view is stationary.
When the field of view is not moving, the new results obtained when the focused electron beam returns to a specific position are combined with the set of existing values in the corresponding pixel to improve the overall S/N ratio. The set of values may constitute an X-ray energy spectrum in the form of a histogram, where each "bin" represents the number of photons recorded over a small energy range, or it may be the result of processing such a histogram to extract a set of values representing the number of photons collected from the characteristic emissions of a set of chemical elements. The X-ray signal is typically the number of photons recorded in a pixel dwell time, and for each value in the set of values for a particular pixel, the new count may simply be added to the existing count, so that the pixel value represents the total count accumulated over all data frames. For display, the total count is simply divided by the number of frames for which the "average" mode has been used, so that the intensity remains constant while the S/N is improved by reducing Poisson counting noise. Alternative implementations may be used to provide any S/N improvement in signal value while the system is in the "average" mode. For example, a "Kalman" recursive filter for a particular pixel value may be described as follows:
Y(N)=A*S(N)+(1-A)*Y(N-1)
Where S(N) is the signal value of the Nth incoming frame of image data, Y(N-1) is the previous value stored in the pixel, Y(N) is the new value for the pixel, and A is less than or equal to 1. If A = 1, this is effectively equal to the "refresh" mode, whereas a smaller value of A provides an averaging effect in which the most recent result receives weight A and previous frames receive exponentially decaying weights, so that the overall effect resembles a long-persistence display screen. However, from the start of accumulation, the best noise reduction is obtained by changing A for each successive data frame such that A = 1/N, which produces the same S/N improvement as an equal-weight average over all frames.
The Kalman recursive filter is a convenient way to achieve signal averaging with only a single stored image. However, if there is sufficient computer memory to store data from the N most recent image frames in separate image stores, an alternative method of signal averaging may be used, so that data from the most recent N image frames is always available for the signal-averaging calculation.
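A minimal sketch of the recursive averaging described above, using A = 1/N so that the single stored image always holds the equal-weight mean of all frames received so far. This is illustrative Python; frames are represented here as flat lists of pixel values:

```python
def kalman_average(frames):
    """Recursive averaging Y(N) = A*S(N) + (1-A)*Y(N-1) with A = 1/N.

    With A = 1/N for the N-th frame, the stored value equals the
    equal-weight mean of all frames seen so far, while only a single
    stored image is kept in memory.
    """
    y = None
    for n, s in enumerate(frames, start=1):
        a = 1.0 / n
        if y is None:
            y = list(s)  # first frame: Y(1) = S(1)
        else:
            y = [a * sn + (1.0 - a) * yn for sn, yn in zip(s, y)]
    return y


# Three noisy "frames" of the same 4-pixel signal:
frames = [[10, 0, 4, 8], [14, 2, 4, 6], [12, 4, 4, 10]]
print([round(v, 6) for v in kalman_average(frames)])  # [12.0, 2.0, 4.0, 8.0]
```

The printed result is the per-pixel mean of the three frames, illustrating the S/N improvement obtained without storing more than one image.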
A key requirement for enabling a seamless transition between the "refresh" and "average" modes is that the system knows when the user moves the field of view. If the computer controlling the signal acquisition also knows when the user requests an adjustment of the field of view or the microscope conditions, it can immediately decide which acquisition mode to use. Otherwise, the control computer must infer whether the field of view is changing. In this case, a first frame of the electronic image data is saved and each successive frame or partial frame of the electronic image data is compared with it to see whether it is different. Once a significant shift is detected (e.g., by observing a change in the magnitude or offset of the maximum of the cross-correlation of the two image regions), the system switches to "refresh" mode and remains in that mode until two consecutive images exhibit no significant shift, at which point the system reverts to "average" mode. Such a test is well suited to the case where the user moves the sample stage under the beam, since a displacement of the field of view will then certainly occur. It is also effective for detecting magnification changes between the two images, since these still generally produce a change in the maximum value of the cross-correlation result. Other tests may be used to detect changes in microscope conditions. For example, if brightness or contrast is altered, the centroid and standard deviation of the histogram of the digital image will change, as is also the case when the electron beam energy is altered by changing the microscope accelerating voltage. In addition, a change in focus can be detected by observing a change in the frequency distribution in the power spectrum of the digital image. Similar methods can be used to detect differences between X-ray images of specific chemical elements.
Alternatively, an X-ray image may be generated using signals from the total X-ray spectrum recorded at each pixel, so that the image has a better S/N than an image for a particular chemical element. Differences in this total X-ray spectral image can then be used to detect a change in field of view or conditions. The sensitivity of these tests depends on the S/N of the images, and the criteria for detecting change need to be adjusted to give the best compromise between slow response to change and false detection when there is no change. Thus, whenever possible, the computer is preferably arranged to learn when the user has intentionally changed the scan area, so that the correct acquisition mode can be selected without having to test for image differences.
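The shift-detection idea can be illustrated with a much-simplified one-dimensional cross-correlation; a real system would correlate 2-D image regions and also monitor the correlation maximum itself. The function name and test data here are illustrative:

```python
def detect_shift(prev, curr, max_shift=3):
    """Estimate the integer shift between two 1-D line profiles by
    locating the maximum of their cross-correlation.

    A nonzero result (or a drop in the correlation maximum) would
    trigger the switch to "refresh" mode in the scheme described
    in the text.
    """
    best_shift, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        score = sum(
            prev[i] * curr[i + shift]
            for i in range(len(prev))
            if 0 <= i + shift < len(curr)
        )
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift


line = [0, 1, 5, 9, 5, 1, 0, 0]          # a bright feature at index 3
moved = [0, 0, 0, 1, 5, 9, 5, 1]          # same feature shifted by 2
print(detect_shift(line, moved))          # 2 -> field of view moved
print(detect_shift(line, line))           # 0 -> field of view unchanged
```

In practice the comparison would be repeated for each incoming frame, with the detection threshold tuned against image noise as discussed above.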
In general, whether the user is intentionally changing the field of view is more easily detected from an instruction the user sends to the SEM than from differences between images acquired in successive scans. However, even if the user does not intend to change the field of view, the field of view may still change due to, for example, mechanical or thermal relaxation effects in the sample stage carrier. Thus, one option is for the user to press a button to switch the acquisition mode to one in which successive data frames are signal-averaged or accumulated. In this mode, any unintended offset may be corrected before the data frames are combined. If there is no way to determine whether the user is intentionally changing the field of view, another option is to operate in the fast acquisition mode by default and provide a "pause"/"resume" button that can be pressed to switch to the slower acquisition or accumulation mode, for example to inspect an image with better S/N before resuming stage movement.
An example of an offset-correction mode that may be applied to the method is as follows.
When the sample is offset, the beam position can be adjusted to follow the sample and continue to acquire data as long as the acquisition region remains within the field of view. Once the portion of the acquisition region reaches the edge of the field of view, the beam is no longer able to reach all pixels within the acquisition region and the data integrity is reduced.
If the size of the acquisition region is close to the size of the field of view, or if the acquisition region is positioned close to one or more edges of the field of view, the amount the sample may be offset in some directions before the sample reaches one or more boundaries of the field of view is quite limited. To increase the amount by which the sample can be offset before this occurs, the scan area for acquiring data needs to be reduced to a safe area near the center of the field of view. The security area may be defined using an extended field pattern.
When the extended field mode is selected, the maximum allowed offset is defined as a percentage of the image field width. In one example, the available options include 50%, 150% and 350% of the image width. This percentage is the percentage of the image width by which the sample can shift in one direction before the acquisition region contacts the edge of the field of view (i.e., the area scanned by the electron beam) on the microscope. In order to allow the sample to shift by the defined amount, the image must be reduced accordingly. Thus, the higher the percentage selected, the smaller the image must be to allow it to be offset by the defined percentage. This means that images acquired after offset correction is established with the extended field mode will appear to cover a much smaller area at higher magnification than images acquired before offset correction is established.
For example, if the maximum offset is set to 150% of the field width, 25% of the center of the original field of view will be used.
In some cases, field of view reduction is not ideal if the field of view and acquisition region have been established before offset correction is established using the extended field mode. To avoid this, the "keep object size" option may be enabled.
When the "keep object size" option is selected, the images acquired after offset correction is established appear the same as images acquired before (i.e., the images have the same magnification and cover the same area of the sample). However, in order to allow the sample to shift before reaching the edge of the field of view on the microscope (the area that can be scanned by the electron beam), and to allow the image to move by the amount set in the "maximum offset" field, the field of view on the microscope must be increased. This is achieved in the background by changing the magnification on the SEM.
For example, if the magnification on the SEM is initially set at 1000x and the maximum offset is set at 50% of the field width, then in the background the SEM magnification is set at about 500x.
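The arithmetic behind the extended field mode can be sketched as follows. This is illustrative Python reproducing the two worked examples above; the relation field = image × (1 + 2 × offset fraction) is inferred from those examples, and the function names are hypothetical:

```python
def reduced_scan_fraction(max_offset_fraction):
    """Fraction of the field width used for acquisition when the
    extended field mode reserves room for an offset of
    max_offset_fraction (of the image width) in each direction.

    Derived from field = image + 2 * offset * image, i.e.
    image / field = 1 / (1 + 2 * max_offset_fraction).
    """
    return 1.0 / (1.0 + 2.0 * max_offset_fraction)


def background_magnification(current_mag, max_offset_fraction):
    """SEM magnification set in the background under the "keep object
    size" option: the field of view grows by the reciprocal of the
    scan fraction, so the magnification falls by the same factor."""
    return current_mag * reduced_scan_fraction(max_offset_fraction)


print(reduced_scan_fraction(1.5))            # 0.25 -> central 25% used
print(background_magnification(1000, 0.5))   # 500.0, as in the text
```

Both printed values match the worked examples: a 150% maximum offset uses the central 25% of the field, and a 50% offset at 1000x requires a background magnification of about 500x.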
Whenever the field of view and microscope conditions are stationary, X-ray spectral data is acquired for each pixel, and in "average" mode this data can be accumulated as successive frames of image data are combined to improve S/N. When a field-of-view change is introduced or detected, the acquisition switches to "refresh" mode; at that point the accumulated X-ray spectral data forms an X-ray "spectral image" in which each pixel has an associated X-ray energy spectrum for that pixel location. The sum of the spectra of all pixels in the field of view forms a single "sum spectrum" that can be processed to automatically identify ("Auto-ID") chemical elements from the characteristic emission peaks that appear in the spectrum. The accuracy of the automatic identification can be improved by using the techniques described in patent application PCT/GB2014/051555 for pulse pile-up correction and spectrum processing. As in PCT/GB2014/051555, clustering techniques may also be used to identify groups of pixels with similar spectra; the sum of all spectra from a group of similar pixels may be analyzed to find matching entries in a spectral library, or the summed spectrum may be analyzed for quantitative elemental composition, which can be matched against a library of compositions of known compounds so that the compounds can be identified. Thus, at the point just before the field of view is modified, an X-ray spectral image can be obtained from the current field of view, and chemical elements or even compounds can be detected within the field of view. If the field of view is controlled by movement of the support or stage that carries the sample, the stage coordinates (e.g., X, Y, Z) will define the position of the field of view, while the extents of the field in X and Y are defined by the beam deflection. If beam deflection is used to offset the field of view from the center position, there will be additional coordinates defining the beam deflection.
The combination of stage and beam coordinates and the size of the scanned area on the sample surface are stored in a database along with a list of detected elements or compounds, and if the storage space permits, the entire X-ray spectral image of the field of view is saved in the database.
X-ray data typically has a poorer S/N than electronic signal data, and it may be beneficial to sacrifice some spatial resolution of the X-ray data to achieve a better S/N. For example, the X-ray images may be "merged", in which data from each group of adjacent pixels is combined to give a single output pixel. Thus, the X-ray data may be converted internally into a pixel array with improved S/N, but each pixel corresponds to a larger area on the sample than the pixels on the resolution grid used for acquisition. Reducing the number of pixels in the internal array by merging also helps to reduce the time required to process the data, for example to identify chemical elements, and thus improves response time. When the merged X-ray data is converted into an X-ray image for display, the X-ray image will have a lower spatial resolution than the electronic image, but will have reduced statistical noise to improve the identification of features. Reducing the resolution of the X-ray image also visually minimizes the disparity between the S/N of the electronic image and that of the X-ray image. The choice of X-ray image resolution may be adaptive when the field of view is stationary and data is accumulating, with the resolution of the X-ray image increasing as the number of accumulated frames, and hence the S/N, increases. Even without merging, the S/N of the displayed X-ray image can be improved by low-pass spatial filtering or "smoothing", at the cost of blurring image detail to some extent. Likewise, when the field of view is stationary, the degree of smoothing may be reduced as the number of accumulated frames increases.
The merging of the X-ray data may be achieved by a combination of electronic and software methods. For example, when scanning the beam over a conventional progressive raster pattern, instead of saving a set of values for each pixel along a row in a stored image, the X-ray data may be accumulated continuously while the beam moves over 4 successive locations collecting electronic image data, and then a single set of values is stored for every 4 pixels along the row, that set corresponding to the spectra gathered from the 4 individual locations on the row. If pixel data at the same locations along the row are summed over a series of 4 consecutive rows, the result is a single set of values representing the sum of X-ray data at the beam locations of a 4 × 4 array.
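A software-only sketch of the merging described above, summing each 4 × 4 block of pixel counts into one "superpixel". This is illustrative Python; a real system might combine hardware accumulation along a row with software summing across rows, as described in the text:

```python
def bin_xray_counts(image, factor=4):
    """Combine each factor x factor block of pixel counts into one
    aggregate "superpixel" to improve S/N, as described for the
    X-ray data.

    `image` is a list of rows of integer photon counts; its
    dimensions are assumed here to be divisible by `factor`.
    """
    binned = []
    for r in range(0, len(image), factor):
        row = []
        for c in range(0, len(image[0]), factor):
            row.append(sum(image[r + i][c + j]
                           for i in range(factor)
                           for j in range(factor)))
        binned.append(row)
    return binned


# A 4 x 8 frame of unit counts becomes a 1 x 2 frame of summed counts:
frame = [[1] * 8 for _ in range(4)]
print(bin_xray_counts(frame))  # [[16, 16]]
```

Each output value carries 16 times the counts of a single pixel, so its relative Poisson noise is reduced by a factor of 4, at the cost of a coarser grid.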
When using an X-ray detection system, the rate at which individual X-ray photons can be measured is limited because of Poisson arrival times, and the ratio of the output count rate (OCR) to the input count rate (ICR) therefore decreases as ICR rises. The ICR may vary from material to material as the user moves across a larger area of the sample. If the beam current is too high, the ICR on some materials may saturate the detection system and produce a lower OCR than materials with lower ICR. Therefore, the beam current needs to be set to avoid saturation. A visual tool that shows when the pulse processor is overloaded is useful for setting the beam current correctly, so that the chemical element content does not show any anomalies when the area is explored. FIG. 5 shows an example of a display generated by monitoring ICR and OCR at all locations within the field of view of an area on a sample. The electronic image is normally displayed in a single color, but when OCR/ICR falls within certain ranges the image is color-coded. For example, if OCR/ICR < 0.3 the color may be red, and if 0.3 < OCR/ICR < 0.5 the color may be amber. Using this display, the user can adjust the beam current to ensure that there are no "red" areas in a typical field of view, thereby ensuring that the electronics are not overloaded in some areas even though the average OCR/ICR over the entire field of view may look safe.
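The OCR/ICR color coding can be sketched as a simple threshold function. This is illustrative Python using the example thresholds from the text (red below 0.3, amber between 0.3 and 0.5); the function name and the "mono" label for the plain electronic-image grey are hypothetical:

```python
def dead_time_color(icr, ocr):
    """Color-code a pixel by its output/input count-rate ratio.

    Returns "red" when the pulse processor is heavily overloaded,
    "amber" when it is approaching saturation, and "mono" when the
    pixel is shown as plain electronic-image grey.
    """
    ratio = ocr / icr if icr > 0 else 1.0  # no input counts -> no overload
    if ratio < 0.3:
        return "red"
    if ratio < 0.5:
        return "amber"
    return "mono"


print(dead_time_color(icr=100_000, ocr=20_000))  # red   (ratio 0.2)
print(dead_time_color(icr=100_000, ocr=40_000))  # amber (ratio 0.4)
print(dead_time_color(icr=100_000, ocr=90_000))  # mono  (ratio 0.9)
```

Applied per pixel, this produces exactly the kind of overlay described for FIG. 5, letting the user reduce beam current until no "red" regions remain.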
In a typical example, the composite image frame display shows one or more images that cover the entire configured field of view in either "static" or "dynamic" mode. Even though the images are the same size, the total time to traverse the field of view to collect a new data frame is less in "dynamic" mode than in "static" mode.
In another example, an alternative method for accelerating data acquisition in dynamic mode is employed, in particular by using a "reduced raster" as previously described. With this modified scan pattern, the electron beam traverses a sub-portion of the configured field of view on the sample, and only that sub-portion of the configured field of view is shown in the composite image frame. Thus, the composite image frame has a modified field of view that is smaller than the initially configured field of view of the frame. This concept is illustrated in fig. 7, where in dynamic mode a smaller area is scanned and only features near the center of the field are seen in the composite image frame display; however, the smaller area on the sample can be traversed in a shorter time, giving a faster frame rate.
In this example, the magnification is the same in both the static mode and the dynamic mode, such that the features visible in the center region of the display do not change in size when switching from the static mode to the dynamic mode. However, in dynamic mode, only sub-portions of the region on the sample are shown in the composite image frame.

Claims (53)

1. A method for analyzing a sample in a microscope, the method comprising:
acquiring a series of composite image frames using a first detector and a second detector different from the first detector, wherein acquiring a composite image frame comprises:
a) causing a charged particle beam to traverse a region of the sample, the region corresponding to a configured field of view of the microscope, wherein:
when a mode parameter has a first value, the traversal of the beam is along a first traversal path over the region and according to a first set of traversal conditions, and
when the mode parameter has a second value, the traversal of the beam is along a second traversal path over the region and according to a second set of traversal conditions,
wherein a first total time required for the beam to traverse the entire first traversal path according to the first set of traversal conditions is less than a second total time required for the beam to traverse the entire second traversal path according to the second set of traversal conditions;
b) monitoring, using the first detector, a first set of resulting particles generated within the sample at a first plurality of locations within the region, so as to obtain a first image frame comprising a plurality of pixels corresponding to the first plurality of locations and having values derived from the monitored particles generated at the first plurality of locations;
c) monitoring, using the second detector, a second set of resulting particles generated within the sample at a second plurality of locations within the region, so as to obtain a second image frame comprising a plurality of pixels corresponding to the second plurality of locations and having respective sets of values derived from the monitored particles generated at the second plurality of locations; and
d) combining the first image frame and the second image frame so as to produce the composite image frame, such that the composite image frame provides data derived from particles generated at the first plurality of locations and the second plurality of locations within the region and monitored by each of the first detector and the second detector;
and displaying the series of composite image frames in real time on a visual display, wherein the visual display is updated to display each composite image frame in turn.
2. The method according to claim 1, wherein the value of the mode parameter is configured according to whether the configured field of view is changing or unchanged.
3. The method according to claim 2, wherein the mode parameter is configured to have the first value in response to the configured microscope field of view changing.
4. The method according to claim 2 or claim 3, wherein the mode parameter is configured to have the second value in response to the configured microscope field of view being unchanged.
5. The method according to any one of the preceding claims, wherein the mode parameter has the first value when the configured microscope field of view is different from the microscope field of view used for the immediately preceding composite image frame in the series.
6. The method according to any one of the preceding claims, wherein the mode parameter has the second value when the configured microscope field of view is the same as the microscope field of view used for the immediately preceding composite image frame in the series.
7. The method according to any one of the preceding claims, wherein the value of the mode parameter is user-configurable.
8. The method according to claim 7, wherein the mode parameter is set to the second value when a first user input is provided.
9. The method according to any one of the preceding claims, wherein acquiring a composite image frame further comprises:
for each of at least a subset of the plurality of pixels comprised by the second image frame:
if a second mode parameter has a first value:
maintaining the set of derived values of the pixel in the second image frame for use in the composite image frame; or,
if the second mode parameter has a second value:
combining the set of derived values of the pixel with the set of derived values of the corresponding pixel in each of one or more previous second image frames in the series, so as to obtain a set of combined pixel values having an increased signal-to-noise ratio, and replacing the set of derived values of the pixel in the second image frame with the set of combined pixel values for use in the composite image frame.
the composite image frame. 10.根据权利要求9所述的方法,其中,如果所述配置的显微镜视场不同于用于所述一系列中的紧接在前的合成图像帧的显微镜视场,则所述第二模式参数具有所述第一值。10. The method of claim 9, wherein the second mode parameter has the first value if the configured microscope field of view is different from the microscope field of view used for an immediately preceding synthetic image frame in the series. 11.根据权利要求9或权利要求10所述的方法,其中,当所述配置的显微镜视场与用于所述一系列中的紧接在前的合成图像帧的显微镜视场相同时,所述第二模式参数具有所述第二值。11. A method according to claim 9 or claim 10, wherein the second mode parameter has the second value when the configured microscope field of view is the same as the microscope field of view used for an immediately preceding synthetic image frame in the series. 12.根据权利要求9至11中任一项所述的方法,其中,所述第二模式参数的值是用户可配置的。12. A method according to any one of claims 9 to 11, wherein the value of the second mode parameter is user configurable. 13.根据权利要求12所述的方法,其中,响应于用户输入,将所述第二模式参数设定为所述第二值。13. The method of claim 12, wherein the second mode parameter is set to the second value in response to user input. 14.根据权利要求9至13中任一项所述的方法,其中,如果所述第二模式参数具有所述第二值,则所述获取合成图像帧还包括:获得表示所述显微镜的实际视场与参考视场之间的差异的视场偏差数据。14. The method according to any one of claims 9 to 13, wherein if the second mode parameter has the second value, acquiring the synthetic image frame further comprises obtaining field of view deviation data representing a difference between an actual field of view of the microscope and a reference field of view. 15.根据权利要求14所述的方法,其中,所述参考视场包括以下各项中的任一个:正在获取的合成图像帧的配置的视场;以及所述一系列中的先前的合成图像帧的实际视场。15. The method of claim 14, wherein the reference field of view comprises any of: a configured field of view of the synthetic image frame being acquired; and an actual field of view of a previous synthetic image frame in the series. 16.根据权利要求14或15所述的方法,其中如果所述第二模式参数具有所述第二值,则所述获取合成图像帧还包括:对于所述第二图像帧包括的所述多个像素的至少一个子集中的每一个,根据所述视场偏差数据,确定所述一系列中的一个或多个先前的第二图像帧中的每一个的对应像素,所述像素的所述导出值的集合与所述对应像素组合以获得组合像素值的集合。16. 
A method according to claim 14 or 15, wherein if the second mode parameter has the second value, the acquiring of the synthetic image frame further comprises: for each of at least a subset of the plurality of pixels included in the second image frame, determining, based on the field of view deviation data, a corresponding pixel of each of one or more previous second image frames in the series, and combining the set of derived values of the pixels with the corresponding pixels to obtain a set of combined pixel values. 17.根据权利要求14至16中任一项所述的方法,其中,如果所述第二模式参数具有所述第二值,则所述获取合成图像帧还包括:根据所述视场偏差数据来调整所述实际视场,以减小所述显微镜的实际视场与所述参考视场之间的差异。17. A method according to any one of claims 14 to 16, wherein, if the second mode parameter has the second value, the acquiring of the synthetic image frame further comprises: adjusting the actual field of view according to the field of view deviation data to reduce the difference between the actual field of view of the microscope and the reference field of view. 18.根据前述权利要求中任一项所述的方法,其中,获取合成图像帧还包括:处理根据所述第二集合的粒子获得的光谱数据,以获得指示所述第二集合的粒子中的分别与一个或多个特征线发射相对应的粒子的数量的数据,以便导出所述第二图像帧所包括的所述像素的值的相应集合。18. A method according to any one of the preceding claims, wherein acquiring a synthetic image frame further comprises: processing spectral data obtained based on particles of the second set to obtain data indicating the number of particles in the second set that respectively correspond to one or more characteristic line emissions, so as to derive a corresponding set of values of the pixels included in the second image frame. 19.根据权利要求18所述的方法,其中,所述处理包括:当所述一个或多个特征线发射在能量范围上扩展和/或对应于重叠的能量范围时,提取指示粒子的数量的数据。19. The method of claim 18, wherein the processing comprises extracting data indicative of a number of particles when the one or more characteristic line emissions extend over an energy range and/or correspond to overlapping energy ranges. 
20. The method of claim 18 or claim 19, wherein one or more of the sets of values each comprise a set of results of processing a histogram in which the area of each rectangle represents the number of second particles having energies within the energy range corresponding to the width of the rectangle, so as to extract a set of values representing the numbers of second particles collected from the characteristic emissions of a set of chemical elements.

21. The method of any preceding claim, wherein the first detector is an electron detector.

22. The method of any preceding claim, wherein the second detector is an X-ray detector.

23. The method of claim 22, wherein the X-ray detector is disposed between a beam source and the sample, the X-ray detector having one or more sensor portions facing the sample and at least partially surrounding the incident charged particle beam.

24. The method of any preceding claim, wherein the length of the first traversal path is shorter than the length of the second traversal path, such that the first total time is less than the second total time.

25. The method of any preceding claim, wherein the first set of traversal conditions and the second set of traversal conditions are configured such that the average rate at which the beam traverses the first traversal path is faster than the average rate at which the beam traverses the second traversal path, such that the first total time is less than the second total time.

26. The method of claim 25, wherein the first and second sets of traversal conditions are configured such that a first linear density, along the first traversal path, of the locations within the region at which generated particles of the first set are configured to be monitored is less than a second linear density, along the second traversal path, of the locations within the region at which generated particles of the first set are configured to be monitored.

27. The method of claim 25 or claim 26, wherein the first and second sets of traversal conditions are configured such that a first linear density, along the first traversal path, of the locations within the region at which generated particles of the second set are configured to be monitored is less than a second linear density, along the second traversal path, of the locations within the region at which generated particles of the second set are configured to be monitored.

28. The method of any of claims 25 to 27, wherein the first and second sets of traversal conditions are configured such that a first configured monitoring duration, for which the particles of the first set generated at each of the first plurality of locations along the first traversal path are monitored, is less than a second configured monitoring duration, for which the particles of the first set generated at each of the first plurality of locations along the second traversal path are monitored.

29. The method of any of claims 25 to 28, wherein the first and second sets of traversal conditions are configured such that a first configured monitoring duration, for which the particles of the second set generated at each of the second plurality of locations along the first traversal path are monitored, is less than a second configured monitoring duration, for which the particles of the second set generated at each of the second plurality of locations along the second traversal path are monitored.

30. A method for analyzing a sample in a microscope, the method comprising:
acquiring a series of composite image frames using a first detector and a second detector different from the first detector, wherein acquiring a composite image frame comprises:
a) causing a charged particle beam to traverse a region of the sample, the region corresponding to a configured field of view of the microscope, wherein, when the configured microscope field of view is different from the microscope field of view used for the immediately preceding composite image frame in the series, the beam is caused to traverse a first traversal path over the region in a total time that is less than the total time in which the beam is caused to traverse a second traversal path over the region when the configured microscope field of view is the same as the microscope field of view used for the immediately preceding composite image frame in the series;
b) monitoring, using the first detector, a first set of resulting particles generated within the sample at a first plurality of locations within the region, so as to obtain a first image frame comprising a plurality of pixels corresponding to the first plurality of locations and having values derived from the monitored particles generated at the first plurality of locations;
c) monitoring, using the second detector, a second set of resulting particles generated within the sample at a second plurality of locations within the region, so as to obtain a second image frame comprising a plurality of pixels corresponding to the second plurality of locations and having respective sets of values derived from the monitored particles generated at the second plurality of locations;
d) for each of the plurality of pixels comprised by the second image frame:
if the configured microscope field of view is different from the microscope field of view used for the immediately preceding composite image frame in the series, maintaining the set of derived values of the pixel in the second image frame for use in the composite image frame; or
if the configured microscope field of view is the same as the microscope field of view used for the immediately preceding composite image frame in the series, combining the set of derived values of the pixel with the set of derived values of the corresponding pixel in each of one or more previous second image frames in the series for which the microscope field of view is the same as the configured microscope field of view, so as to obtain a set of combined pixel values having an increased signal-to-noise ratio, and replacing the set of derived pixel values with the set of combined pixel values in the second image frame for use in the composite image frame; and
e) combining the first image frame and the second image frame to produce the composite image frame, such that the composite image frame provides data derived from particles generated at the first plurality of locations and the second plurality of locations within the region and monitored by each of the first detector and the second detector;
and displaying the series of composite image frames in real time on a visual display, wherein the visual display is updated to display each composite image frame in turn.

31. The method of any preceding claim, wherein the first traversal path substantially covers the entire configured field of view of the microscope.

32. The method of any of claims 1 to 30, wherein the first traversal path covers a modified field of view contained within the configured field of view.

33. The method of any preceding claim, wherein the second traversal path covers a modified field of view containing the configured field of view.

34. The method of any preceding claim, wherein the mode parameter is set to the first value if the configured microscope field of view is different from the microscope field of view used for the immediately preceding composite image frame in the series, and the mode parameter is set to the second value if the configured microscope field of view is the same as the microscope field of view used for the immediately preceding composite image frame in the series.

35. The method of any preceding claim, wherein acquiring a composite image frame further comprises:
for each of the plurality of pixels comprised by the second image frame:
if the configured microscope field of view is different from the configured microscope field of view for the immediately preceding pixel of the second image frame, and if the acquisition mode parameter is equal to the second value, setting the acquisition mode parameter equal to the first value; or
if the configured microscope field of view is the same as the configured microscope field of view for the immediately preceding pixel of the second image frame, and if the acquisition mode parameter is equal to the first value, setting the acquisition mode parameter equal to the second value.

36. The method of claim 35, wherein setting the acquisition mode parameter equal to the first value or the second value is performed before monitoring the particles generated within the sample at the location within the region corresponding to the immediately following pixel in the second image frame.

37. The method of any preceding claim, wherein acquiring a composite image frame further comprises:
for each of the plurality of pixels comprised by the first image frame:
if the configured microscope field of view is different from the microscope field of view used for the immediately preceding composite image frame in the series, maintaining the derived value of the pixel in the first image frame for use in the composite image frame; or
if the configured microscope field of view is the same as the microscope field of view used for the immediately preceding composite image frame in the series, combining the derived value of the pixel with the derived value of the corresponding pixel in each of one or more previous second image frames in the series for which the microscope field of view is the same as the configured microscope field of view, so as to obtain a combined pixel value having an increased signal-to-noise ratio, and replacing the derived pixel value with the combined pixel value in the first image frame for use in the composite image frame.

38. The method of any preceding claim, wherein acquiring a composite image frame further comprises:
grouping together the sets of pixel values of one or more subsets of pixels in the second image frame so as to obtain one or more respective sets of aggregated pixel values, and
replacing each of the one or more subsets of pixels in the second image frame with an aggregated pixel having a set of values equal to the respective set of aggregated pixel values.
39. The method of any preceding claim, wherein monitoring the second set of particles to obtain the second image frame comprises deriving two or more signals of different types from the second detector so as to obtain a sub-image frame corresponding to each of the signals, and wherein combining the first image frame and the second image frame comprises combining the first image frame with one or more of the sub-image frames.

40. The method of any preceding claim, wherein combining the first image frame and the second image frame to produce the composite image frame comprises overlaying the first image frame and the second image frame, such that the composite image frame comprises a plurality of pixels, each corresponding to one of the plurality of locations within the region and providing data derived from particles comprised by both the first set and the second set and generated at the respective location.

41. The method of claim 40, wherein combining the first image frame and the second image frame comprises computing the colors of the composite image pixels based on the intensities of the corresponding pixels in the first image frame and the second image frame.

42. The method of any preceding claim, wherein combining the first image frame and the second image frame to produce the composite image frame comprises juxtaposing the first image frame and the second image frame.

43. The method of any preceding claim, wherein the microscope conditions include any of: sample stage position and orientation, magnification, focus, astigmatism, accelerating voltage, beam current, the scan deflection configured for the charged particle beam, and the position and orientation configured for the sample.

44. The method of any preceding claim, wherein the rate at which the series of composite image frames is acquired and displayed is at least 1 frame per second, preferably at least 3 frames per second, more preferably at least 20 frames per second.

45. The method of any of claims 30 to 44, wherein combining the stored pixels with the pixel so as to increase the signal-to-noise ratio of the pixel is performed by signal averaging, or signal accumulation, or Kalman recursive filtering, or summing the measurements and varying the brightness scaling according to the number of measurements.

46. A method for analyzing a sample in a microscope, the method comprising:
using two acquisition modes to obtain a series of composite image frames using a first detector and a second detector different from the first detector, wherein acquiring data for a composite image frame in the first mode comprises:
a1) causing a charged particle beam to traverse, in a time T1, a region of the sample corresponding to a configured field of view of the microscope,
a2) monitoring, using the first detector, the resulting set of first particles generated within the sample, so as to obtain a first image frame comprising N1 pixels in which the pixel values correspond to the monitored first particles from the vicinity of locations within the region,
a3) monitoring, using the second detector, the resulting set of second particles generated within the sample, so as to obtain a second image frame comprising N2 pixels in which the pixels have sets of values derived from the monitored second particles from the vicinity of locations within the region,
a4) if the configured microscope field of view is different from the microscope field of view used for the immediately preceding composite image frame, for each pixel in the second image frame, using the values of the pixel as the values to be used to generate the next composite image frame in the series,
a5) if the configured microscope field of view is the same as the microscope field of view used for the immediately preceding composite image frame in the series, changing to a second acquisition mode, wherein acquiring a composite image frame in the second mode comprises:
b1) causing the charged particle beam to traverse, in a time T2, the region of the sample corresponding to the configured field of view of the microscope,
b2) monitoring, using the first detector, the resulting set of first particles generated within the sample, so as to obtain a first image frame comprising M1 pixels in which the pixel values correspond to the monitored first particles from the vicinity of locations within the region,
b3) monitoring, using the second detector, the resulting set of second particles generated within the sample, so as to obtain a second image frame comprising M2 pixels in which the pixels have sets of values derived from the monitored second particles from the vicinity of locations within the region,
b4) for each pixel of the second image frame, if the configured microscope field of view is the same as the microscope field of view of the immediately preceding composite image frame in the series, combining the set of values of the pixel with one or more sets of values of the corresponding pixel in previously acquired second image frames from the same field of view, so as to increase the signal-to-noise ratio of the values of the corresponding pixel to be used to generate the next composite image frame in the series,
b5) if the configured microscope field of view changes from the microscope field of view used for the immediately preceding composite image frame in the series, changing to the first acquisition mode,
and,
c) producing the composite image frame using the sets of pixel values for the second particles and the pixel values for the first particles intended for generating the new composite image frame, such that the composite image frame is a spatial representation of the region, wherein the value of a pixel at a position in the composite image frame is derived from data derived from particles generated at the corresponding location within the region and monitored by each of the first detector and the second detector,
and displaying the series of composite image frames in real time on a visual display,
wherein the visual display is updated to display each composite image frame in turn, so as to allow an observer to identify potential features of interest while the field of view is static or changing,
and wherein the time T1 to traverse the region in the first mode is less than the time T2 to traverse the region in the second mode.

47. The method of any preceding claim, wherein the second detector is any of an X-ray spectrometer, an electron diffraction pattern camera, an electron energy loss spectrometer, or a cathodoluminescence detector.

48. An apparatus for analyzing a sample in a microscope, the apparatus comprising an X-ray detector, a processor, and a computer program which, when executed by the processor, causes the processor to perform the method of any of claims 1 to 47.

49. The apparatus of claim 48, wherein a change in the field of view or in the configured microscope conditions is detected by mathematically comparing a new digital image with a previously acquired digital image.

50. The apparatus of claim 48 or claim 49, wherein the second image frame comprises data derived from: a spectrum obtained by an X-ray spectrometer, an electron diffraction pattern obtained by an electron-sensitive camera, or a spectrum obtained by an electron energy loss spectrometer or a cathodoluminescence detector.

51. A scanning electron microscope comprising the apparatus of any of claims 48 to 50.

52. A computer-readable storage medium having stored thereon program code configured to perform the method of any of claims 1 to 47.

53. A computer program comprising instructions which, when executed, cause an apparatus to perform the method of any of claims 1 to 47.
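Claims 18 to 20 describe deriving per-element pixel values by processing an energy histogram of the X-ray photons detected at each beam position. The window-summing step can be sketched in Python with NumPy as below; the element windows and line energies used are illustrative assumptions, not calibrated values from the patent.

```python
import numpy as np

def element_counts(energies_kev, windows, bin_width=0.01, e_max=20.0):
    """Histogram detected X-ray photon energies and sum the counts that fall
    inside each element's characteristic-line window (cf. claims 18-20).
    `windows` maps element name -> (low, high) energy in keV; any windows
    passed in are illustrative only, not calibrated line positions."""
    edges = np.arange(0.0, e_max + bin_width, bin_width)
    hist, _ = np.histogram(energies_kev, bins=edges)
    counts = {}
    for element, (lo, hi) in windows.items():
        # Select histogram rectangles whose left edge lies in the window;
        # each rectangle's area is the photon count in that energy slice.
        in_window = (edges[:-1] >= lo) & (edges[:-1] < hi)
        counts[element] = int(hist[in_window].sum())
    return counts
```

Summing rectangle areas over a window is the simplest extraction; when characteristic line emissions overlap in energy (claim 19), a practical spectrometer pipeline would instead fit or deconvolve the peaks rather than use plain window sums.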
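The mode switch that claims 1, 30 and 46 share - a fast survey scan while the field of view is changing, and a slower scan with per-pixel accumulation once it settles - can be sketched as a simple loop. Everything here (the `fast_frame`/`slow_frame` callables standing in for the two scan modes, and a running average as the combining method, one of the options listed in claim 45) is an illustrative assumption, not the claimed implementation.

```python
import numpy as np

def acquire_series(fovs, fast_frame, slow_frame):
    """For each configured field of view, use a fast scan when the FOV has
    just changed (mode parameter -> first value); otherwise use a slow scan
    and average the second-detector frame with previous frames from the same
    FOV to raise its signal-to-noise ratio (mode parameter -> second value).
    Each composite stacks the first-detector frame with the accumulated
    second-detector frame (overlay, cf. claim 40)."""
    composites = []
    prev_fov, accum, n = None, None, 0
    for fov in fovs:
        if fov != prev_fov:                 # FOV changed: fast survey scan
            frame1, frame2 = fast_frame(fov)
            accum, n = frame2.astype(float), 1
        else:                               # FOV static: slow scan + average
            frame1, frame2 = slow_frame(fov)
            accum = (accum * n + frame2) / (n + 1)   # running average
            n += 1
        composites.append(np.dstack([frame1, accum]))
        prev_fov = fov
    return composites
```

In use, each element of the returned list would be rendered to the display in turn, so the operator sees a fast, noisy map while navigating and a progressively cleaner one once the stage stops.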
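Claim 49 detects a change of field of view by "mathematically comparing a new digital image with a previously acquired digital image". One standard way to make that comparison is FFT-based cross-correlation; the sketch below is an assumed illustration of the idea, not the particular comparison prescribed by the patent.

```python
import numpy as np

def fov_shift(ref, new):
    """Return the integer (dy, dx) shift to apply to `new` (e.g. via
    np.roll) to register it onto `ref`, found as the peak of the FFT-based
    cross-correlation. A non-zero result signals that the field of view has
    moved between the two frames."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(new))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peaks past the half-way point into negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

A detected shift could feed the field-of-view deviation data of claims 14 to 17, either re-registering accumulated pixels or adjusting the actual field of view back toward the reference.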
CN202280063613.2A 2021-07-23 2022-07-25 Improved guidance for electron microscopes Pending CN118020136A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB2110622.4 2021-07-23
GBGB2110846.9A GB202110846D0 (en) 2021-07-28 2021-07-28 Improved navigation for electron microscopy
GB2110846.9 2021-07-28
PCT/GB2022/051946 WO2023002226A1 (en) 2021-07-23 2022-07-25 Improved navigation for electron microscopy

Publications (1)

Publication Number Publication Date
CN118020136A true CN118020136A (en) 2024-05-10

Family

ID=77540978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280063613.2A Pending CN118020136A (en) 2021-07-23 2022-07-25 Improved guidance for electron microscopes

Country Status (2)

Country Link
CN (1) CN118020136A (en)
GB (1) GB202110846D0 (en)

Also Published As

Publication number Publication date
GB202110846D0 (en) 2021-09-08

Similar Documents

Publication Publication Date Title
EP3655985B1 (en) Improved navigation for electron microscopy
EP2530699B1 (en) Charged particle beam microscope and method of measurement employing same
US7598492B1 (en) Charged particle microscopy using super resolution
US20240339293A1 (en) Improved navigation for electron microscopy
US10964510B2 (en) Scanning electron microscope and image processing method
JP4003423B2 (en) Charged particle beam microscope and charged particle beam microscope method
JP5668056B2 (en) Scanning method
US8153967B2 (en) Method of generating particle beam images using a particle beam apparatus
CN118020136A (en) Improved guidance for electron microscopes
JPH10188883A (en) Energy analyzer
US9859092B2 (en) Particle beam microscope and method for operating a particle beam microscope
EP3648137A1 (en) Electron microscope and image processing method
US11626266B2 (en) Charged particle beam device
US20250232947A1 (en) Charged Particle Beam System
CN112577986A (en) EDX process
US9396906B2 (en) Transmission electron microscope and method of displaying TEM images
US20240241068A1 (en) Live chemical imaging with multiple detectors
JP2014116207A (en) Charged particle beam device
JP2000133195A (en) Transmission electron microscope
JP2015008038A (en) Charged particle beam apparatus
CN113203762A (en) Method of energy dispersing X-ray spectra, particle beam system and computer program product
JPH07153407A (en) Scanning charged particle beam device adjusting method and device
JP2000123772A (en) Method of displaying sample image in electron beam apparatus and electron beam apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: United Kingdom

Address after: Heiwickham, England

Applicant after: OXFORD INSTRUMENTS NANOTECHNOLOGY TOOLS Ltd.

Address before: Oxfordshire

Applicant before: OXFORD INSTRUMENTS NANOTECHNOLOGY TOOLS Ltd.

Country or region before: United Kingdom
