Introduction
Over the years, significant advances have been made in SEM resolution, largely attributable to hardware improvements including higher-brightness sources, better electron optics, and more efficient detectors. The underlying rationale is that a reduced probe diameter leads to better resolution. These changes are significant, but they are generally accompanied by increased complexity and cost. While a small probe diameter may be a necessary condition for high resolution, it may not be sufficient, for a number of reasons. First, although the probe diameter may be minimized at a particular point of impact, it will not be constant along the beam axis, leading to depth-of-focus issues at very high magnification. Second, the signal measured may originate from some depth below the point of impact, and the actual excitation volume may have dimensions that greatly exceed the minimum probe diameter. This difficulty may be partially overcome by choosing a particular signal type, such as secondary electrons (SE) over backscattered electrons (BSE) at high accelerating voltages; however, even SE images may have limitations if the mean free path of the secondary electrons is significantly larger than the probe diameter. Gold on carbon is often the standard of choice for resolution tests because both the excitation volume and the mean free path of SEs can be small relative to the probe diameter; this would not be the case, however, for a fine carbon structure on gold. It is now well recognized that issues relating to excitation volume, particularly for BSEs, can be reduced by going to low beam energies, often 2 keV or less, and this is particularly important in the examination of biological samples and microelectronic devices. Low-voltage operation is, however, often accompanied by larger probe sizes and reduced probe currents. A third critical factor, therefore, is probe current: it must be large enough to give a high signal-to-noise ratio (S/N). If the S/N is too low, fine details may be lost in the noise. Although the S/N of an image can be improved by increasing the image collection time, this can lead to decreased productivity as well as other problems including sample drift, contamination, and vibrations.
Experienced microscopists are well aware of the interplay of all of these factors and are constantly looking for new ways to optimize performance to meet specific needs, recognizing that some compromises must be made. This article describes a new software approach to improved SEM resolution.
Materials and Methods
Given the above considerations, our recent research has concentrated on the question of whether it is possible to develop computationally based procedures to improve resolution, productivity, and image quality when the probe size is larger than the pixel size and the excitation volume is comparable to or smaller than the desired pixel size [Reference Lifshin1]. This condition is a form of "oversampling" and results in blurry images at high magnifications.
Unfortunately, resolution is a term that has no standard definition in scanning electron microscopy, although we all know that it has something to do with seeing finer detail. What is not stressed enough is that it is dependent on many factors that include the sample, the S/N, and the contrast between features. In the context of the present work, resolution improvement is achieved through restoration, that is, utilizing knowledge of how the image was formed, specifically the effects of the point spread function (PSF) and noise.
It is important to distinguish between restoration and image enhancement [Reference Gonzalez2]. Restoration specifically refers to the determination of the original state of an object, which in this case is an accurate rendering of the details of that object from a blurred or noisy image. Enhancement refers to the modification of an image to obtain useful information or a more aesthetically pleasing image. An example would be contrast stretching to accentuate details of similar contrast. Generally, enhancement does not utilize any information about how the image was formed and may accentuate details such that their relative intensities are no longer accurate. As will be shown, implementation here is through deconvolution and regularization, concepts that are not associated with image enhancement.
Image calculations
Let us first define the pixel size at the sample, $d_{pix}$, as the size of the square at the sample that is stepped across the sample to form the digital image. What is perceived as a sharp image is one where non-noise-related abrupt changes in intensity can be visually detected by the human eye. If the probe size, $d_p$, is larger than the pixel size, then as the beam is advanced from pixel to pixel, oversampling will occur and contrast between adjacent pixels will be reduced because of partial re-sampling of the same area. Furthermore, if the excitation volume is larger than the probe size, then the maximum magnification for a sharp image will be reduced. Microscopists generally turn to Schottky or cold field emission (FEG) sources to obtain sharp images at more than 150,000×. Again, depending on the sample type, a variety of factors must be optimized to achieve the highest level of performance, and these conditions may differ from sample to sample: the highest resolution achievable for one sample may not be attainable for another.
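To make the oversampling condition concrete, the short Python calculation below relates pixel size to probe diameter. All numbers are illustrative assumptions, not values from this work.

```python
# Sketch: estimate the oversampling ratio d_p / d_pix for an assumed setup.

def pixel_size_nm(field_width_um: float, pixels_per_line: int) -> float:
    """Pixel size at the sample, d_pix, in nm per pixel."""
    return field_width_um * 1000.0 / pixels_per_line

d_pix = pixel_size_nm(field_width_um=1.0, pixels_per_line=1024)  # ~0.98 nm
d_p = 3.0  # assumed probe diameter in nm

print(f"d_pix = {d_pix:.2f} nm; oversampling ratio d_p/d_pix = {d_p / d_pix:.1f}")
# A ratio greater than 1 means adjacent pixels sample overlapping beam
# footprints, i.e., the oversampling condition (d_p > d_pix) under which
# restoration is applicable.
```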
Observed SEM images are stored as a matrix of numbers related to the measured signal intensity, $I_o(i,j)$, at points with coordinates $(i,j)$, detected sequentially as the probe is scanned point to point on the sample. In the situation where excitation volume and surface morphology effects are small, the problem can be visualized as shown in Figure 1. If $d_p \le d_{pix}$, ideally the signal measured, $I_o(i,j)$, would be only that from the pixel on which the beam is located. However, if $d_p > d_{pix}$, then the signal measured will be the sum of the signals generated from all the pixels sampled. Therefore:

$$I_o(i,j) \;=\; \sum_{k=-s}^{s}\sum_{l=-s}^{s} \mathrm{psf}(k,l)\, I_t(i+k,\, j+l) \qquad (1)$$
where the dimension of the beam in pixels is $d_p = 2s+1$, and $s$ is the number of pixels in the beam on either side of the central pixel (assuming the beam, which need not be symmetric, is large enough that $s > 0$). The actual distribution of electrons in the probe is described by the point spread function, $\mathrm{psf}(k,l)$, in Equation (1). It is a measure of how the probe current, $i_p$, is distributed in space and is generally expressed such that the integrated value of the function is unity. The term $I_t(i,j)$ is the true intensity that would be emitted from a given pixel if $d_p \le d_{pix}$, that is, $s = 0$, when the signal comes only from the specific pixel addressed. Equation (1) indicates that while the beam resides at position $(i,j)$ only a single measurement, $I_o(i,j)$, is made; there are more unknowns ($I_t(i,j)$) than knowns, and the true image cannot be calculated directly. Assume, for example, as shown in Figure 1, that the beam occupies a matrix of 3 × 3 pixels ($s = 1$). In that case there would be a single measurement and nine values of $I_t(i,j)$ to determine, including the one on which the beam is centered. As the beam is stepped a single pixel to the right (increasing $i$), another measurement is made, with three new pixels (unknowns) added and three pixels dropped. Now there are two measurements and twelve unknowns. In normal image collection the beam is also stepped in the $j$ direction, and the ratio of unknowns to knowns approaches one for small PSFs and large images, at which point the set of equations generated could be uniquely solved. In practice the number of measurements will never equal the number of unknowns, but as the number of pixels in an image gets larger, some form of pixel padding around the periphery of the image makes a reasonable approximation of the values of $I_t(i,j)$ possible.
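For readers who prefer code to summation notation, the following is a minimal Python sketch of the forward model in Equation (1), using an assumed 3 × 3 PSF ($s = 1$). It is an illustration only, not the software described here.

```python
import numpy as np
from scipy.signal import convolve2d

# Forward model of Equation (1): the observed image I_o is the true image
# I_t blurred by a normalized PSF as the beam steps from pixel to pixel.

I_t = np.zeros((64, 64))
I_t[20:24, 30:34] = 1.0               # a hypothetical bright feature

psf = np.array([[1.0, 2.0, 1.0],
                [2.0, 4.0, 2.0],
                [1.0, 2.0, 1.0]])
psf /= psf.sum()                       # integrated value of the PSF is unity

# 'same' output with symmetric boundary handling stands in for the pixel
# padding around the image periphery discussed above.
I_o = convolve2d(I_t, psf, mode='same', boundary='symm')
```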
Equation (1) can also be described as the convolution of the PSF with $I_t(i,j)$, but for a proper determination of $I_t(i,j)$ the noise, $\eta(i,j)$, must be added:

$$I_o(i,j) \;=\; \mathrm{psf}(i,j) \ast I_t(i,j) \;+\; \eta(i,j) \qquad (2)$$
It is common practice to rewrite image matrices in what is known as column vector format, and Equation (2) becomes:

$$\mathbf{I}_o \;=\; A\,\mathbf{I}_t \;+\; \boldsymbol{\eta} \qquad (3)$$
where $\mathbf{I}_o$, $\mathbf{I}_t$, and $\boldsymbol{\eta}$ are column vectors representing the observed image, true image, and noise, respectively, and $A$ is a block circulant matrix that can be developed from the PSF. In the absence of noise it would be relatively straightforward to calculate $\mathbf{I}_t$ from $\mathbf{I}_o$ if $A$ is known, or to calculate $A$ (and therefore the PSF) if $\mathbf{I}_t$ and $\mathbf{I}_o$ are known. However, when noise is considered, the problem is significantly more complicated. While the approach needed may not be familiar to many scanning electron microscopists, equations similar to Equation (3) may be found in many books and articles on the general topic of image processing [Reference Gonzalez2–Reference Castleman4]. What is important to recognize is that the determination of $\mathbf{I}_t$ is a very challenging problem even if $\mathbf{I}_o$ is carefully measured and the PSF is known. The reason is the uncertainty in $\boldsymbol{\eta}$, which results in what is termed an "ill-posed" inverse problem requiring some form of functional minimization as well as some form of practical constraint, often referred to as a regularization term. This leads to a re-statement of Equation (3) in various forms such as:

$$\hat{\mathbf{I}}_t \;=\; \underset{\mathbf{I}_t}{\arg\min}\, \Big\{ \big\| \mathbf{I}_o - A\,\mathbf{I}_t \big\|^2 \;+\; \lambda \big\| D\,\mathbf{I}_t \big\|^2 \Big\} \qquad (4)$$
The first term within the norms in Equation (4) is a least-squares minimization of the difference between the observed image and the estimated best-fit image; the second is a smoothing term that accounts for the noise. The parameter $\lambda$ is the regularization parameter, and $D$ is a derivative matrix. Although the solution of Equation (4) for a large image is computationally complex, it can be readily handled by a multicore workstation equipped, if needed, with optional graphics processing units (GPUs). It should be pointed out that while the problem can be approached through the solution of a series of linear equations in real space, there also exist Fourier-space methods such as Wiener deconvolution and nonlinear approaches such as Richardson-Lucy [Reference Gonzalez2–Reference Castleman4], as well as other forms.
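Because $A$ and $D$ are block circulant, Equation (4) also has a closed-form solution in Fourier space. The Python sketch below illustrates that textbook route, with an assumed value of $\lambda$ and a Laplacian chosen for $D$; it is not the workstation's solver.

```python
import numpy as np

def _kernel_transfer(kernel, shape):
    """Zero-pad a small kernel to the image size with its center moved
    to (0, 0), then return its 2-D transfer function."""
    out = np.zeros(shape)
    kh, kw = kernel.shape
    out[:kh, :kw] = kernel
    out = np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(out)

def tikhonov_restore(I_o, psf, lam=0.01):
    """Closed-form Fourier-space solution of Equation (4):
    I_t = F^-1[ conj(H) F(I_o) / (|H|^2 + lam |L|^2) ], where H is the
    transfer function of the PSF and L that of the Laplacian standing in
    for D. lam is the regularization parameter (assumed here; in practice
    it is tuned to the noise level)."""
    H = _kernel_transfer(psf, I_o.shape)
    laplacian = np.array([[0.0,  1.0, 0.0],
                          [1.0, -4.0, 1.0],
                          [0.0,  1.0, 0.0]])
    L = _kernel_transfer(laplacian, I_o.shape)
    I_t_hat = np.conj(H) * np.fft.fft2(I_o) / (np.abs(H)**2 + lam * np.abs(L)**2)
    return np.real(np.fft.ifft2(I_t_hat))
```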
How it works
The outline in Figure 2 is the procedure implemented in the Nanojehm Aura Workstation. It can be applied to any microscope, including those with thermionic, Schottky, or cold field emission sources. The images should be collected in 8-bit or 16-bit TIFF or PNG format to minimize any loss of information. It is also important to examine the image histogram to ensure that no clipping occurs at either end of the image grayscale and that the gamma setting of the detector electronics is set equal to one, which makes the detector output signal linear with respect to its input.
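This histogram check is easy to automate. The sketch below (the file name and the 0.1% clipping threshold are assumptions for illustration) flags an image whose grayscale is clipped at either end:

```python
import numpy as np
from PIL import Image

# Pre-flight check: warn if a meaningful fraction of pixels is pinned at
# either end of the grayscale, which indicates clipping.

img = np.asarray(Image.open("sample_16bit.tif"))   # 8- or 16-bit TIFF/PNG
lo, hi = 0, np.iinfo(img.dtype).max

frac_lo = np.mean(img == lo)   # fraction of pixels pinned at black
frac_hi = np.mean(img == hi)   # fraction of pixels pinned at white

if frac_lo > 0.001 or frac_hi > 0.001:
    print("Warning: possible histogram clipping; adjust brightness/contrast "
          "and re-acquire before restoration.")
```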
Step 1. Reference standard image
Take an image of a calibration standard at a preset step size (in nm per pixel) and at the selected beam voltage. The standard consists of spherical gold or other particles dispersed on a very thin carbon film on a TEM grid, as shown in Figure 3. The particles used are nearly spherical and selected from a reasonably tight size distribution, but they are subject to some variation as a result of their synthesis. Their specific size, shape, and distribution were determined by transmission electron microscopy. The image size of the reference image should be selected such that dozens of particles are included in the field. The particles in the reference image are then used to form a single composite stacked particle after the elimination of any overlapping or out-of-specification particles (see the sketch below). The stacked image has a number of advantages relative to a single-particle image: (1) It represents an average of many particles that may have slight differences in size and shape even after meeting certain criteria set by the software to avoid overlapping or otherwise unacceptable particles. (2) More particles mean better counting statistics. (3) The beam never has to dwell for a long time on any one particle, which could lead to problems with drift or contamination. The stacked particle image is then compared to a theoretically calculated image of a particle of the same size to determine the PSF. Nanojehm provides a standard block with several separate particle size standards. For example, 19 nm particles are used for determining PSFs at higher magnifications, while 52 nm particles are used for determining the PSF at lower magnifications, as in the case of larger-beam thermionic instruments.
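The following Python sketch shows one plausible form of the stacking step under simplifying assumptions: integer particle centers supplied by an unspecified detector, rejection of overlapping and border particles only, and no sub-pixel alignment. It is not the Aura implementation.

```python
import numpy as np

def stack_particles(image, centers, half_width, min_sep):
    """Average windows cropped around isolated particle centers to build
    a single composite stacked particle.

    centers:   (row, col) particle centers from a detector (not shown).
    min_sep:   centers closer than this to a neighbor are rejected as
               overlapping.
    """
    centers = np.asarray(centers, dtype=int)
    crops = []
    for i, (r, c) in enumerate(centers):
        d = np.hypot(*(centers - (r, c)).T)
        d[i] = np.inf                      # ignore distance to itself
        if d.min() < min_sep:
            continue                       # reject overlapping particles
        if (r - half_width < 0 or c - half_width < 0 or
                r + half_width + 1 > image.shape[0] or
                c + half_width + 1 > image.shape[1]):
            continue                       # too close to the image border
        crops.append(image[r - half_width: r + half_width + 1,
                           c - half_width: c + half_width + 1].astype(float))
    return np.mean(crops, axis=0)          # composite stacked particle
```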
Step 2. Images of the sample
Take one or more images of either different samples or different regions of the same sample under microscope conditions identical to those used for collecting the reference standard image. Here, "microscope conditions" refers to settings that might perturb the PSF determined in the previous step, such as astigmatism adjustment, working distance changes (> 2 mm), and kV changes.
Step 3. Image loading
Load the images into the Nanojehm Aura Workstation.
Step 4. Point spread function determination
A high-resolution image of a reference particle is theoretically calculated, which can then be directly compared to the measured stacked single-particle image (Figure 3). These two images can then be used to compute the PSF by a variant of Equation (3). It should be remembered that a PSF refers to the distribution of electrons in space and therefore can be used to restore either BSE or SE images. Figure 4 shows an example of a PSF determined from the particle image shown in Figure 3.
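One common way to carry out such a computation is a regularized Fourier-space division of the two images. The sketch below is a hedged illustration of that idea, not necessarily the Aura algorithm; the small constant eps is an assumed, crude stand-in for the regularization described above.

```python
import numpy as np

def estimate_psf(measured, ideal, eps=1e-3):
    """Estimate the PSF from the measured stacked particle and the
    theoretically calculated particle image (same array size), by
    dividing their spectra with a small eps to suppress
    division-by-noise."""
    M = np.fft.fft2(measured)
    P = np.fft.fft2(ideal)
    H = M * np.conj(P) / (np.abs(P)**2 + eps)    # Wiener-style division
    psf = np.real(np.fft.ifft2(H))
    psf = np.fft.fftshift(psf)                   # center the PSF
    psf /= psf.sum()                             # normalize to unit integral
    return psf
```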
Step 5. Resolution improvement
Once the PSF is determined, it can be used to restore images through the use of Equation (4). Figure 5a is an image of a commonly used gold-on-carbon standard (Pella 617) obtained at 20 kV. The image was taken at high magnification with an LaB6-source instrument using 1 nm per pixel. Since the beam diameter was considerably larger than the pixel size (Figure 1), imaging conditions were in the oversampling mode ($d_p > d_{pix}$), making this image a good candidate for restoration by Equation (4), with the results shown in Figure 5b. While the image does appear clearer, a less subjective indicator of improved resolution is the line profile of an abrupt discontinuity in the structure at a gold-carbon-gold boundary. Here the steeper slope of the line scan in the restored image (Figure 5c) is an indicator of higher resolution.
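A simple way to quantify that steeper slope is the 10–90% rise distance of the edge profile. The sketch below (which assumes a monotonically rising profile across the edge) returns the distance in pixels; a smaller value after restoration indicates improved resolution.

```python
import numpy as np

def rise_distance(profile, lo=0.1, hi=0.9):
    """10-90% rise distance (in pixels) of a rising edge profile,
    e.g., a line scan across an abrupt gold-carbon boundary."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    first_above_lo = np.argmax(p > lo)   # first crossing of the 10% level
    first_above_hi = np.argmax(p > hi)   # first crossing of the 90% level
    return first_above_hi - first_above_lo

# Usage: compare rise_distance(observed_profile) with
# rise_distance(restored_profile) for the same edge.
```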
Results
Figure 6 shows some graphite flakes imaged in the SE mode. The image was taken with an LaB6-source instrument at 10 kV and a probe current of about 60 pA. The image has minimal noise, and a visual comparison of the observed image (Figure 6a) with the restored image (Figure 6b) demonstrates increased sharpness. Figure 7 shows a series of 19 nm gold nanoparticles, also imaged with an LaB6 instrument at 20 kV, but at nearly 9 times higher magnification than Figure 6. Closely spaced particles are more clearly separated in the restored image (Figure 7b). Figure 8 is an image of a focused ion beam (FIB)-prepared thin section of a 22 nm node device. The section was mounted on a thin carbon film on a copper grid, and the structure was so fine that low voltage, in this case 1.5 kV, was preferred to minimize penetration and limit the appearance of overlapping structures. No beam deceleration was used, in order to avoid distortion of the beam by a non-uniform electric field above the specimen. This led to a relatively noisy image in which the FIB preparation markings indicated by the red arrows were obscured. Those markings, as well as other fine details, are visible in the noise-reduced restored image (Figure 8b).
Discussion
As stated previously, the method presented here refers to restoration, which is distinct from image enhancement. The work described here is only a beginning of what in the future may be of great importance to other forms of microscopy, including STEM and ion beam microscopy. In fact, a similar approach is already used in confocal optical microscopy, where point spread functions are determined with the help of fine fluorescent beads dispersed in a sample [Reference Eberle5]. Looking ahead, significant opportunities also exist in the formation of three-dimensional microscopic images. Here, data must be collected more rapidly, as in the case of the elucidation of neuronal interconnects and other cellular connections in biological studies. If large numbers of microtome samples are collected, images or image-related data must be stitched together, as is the case with multibeam SEMs [Reference Murphy and Davidson6]. Fortunately, work of the type described here provides several opportunities for more rapid image collection. The first is the image denoising associated with the regularization process itself, which means more information can be extracted from noisier images than is currently attained. The second is the possibility of resolution improvement in which larger beam sizes and currents might be used for better S/N, while deconvolution recovers the lost resolution. Finally, in the case of multibeam systems, not all beams in an array may have the same PSF. When defocusing or astigmatism is present in some beams, it can be effectively compensated for to give optimal performance for all of the beams.
Much of the research work currently underway involves situations where the probe diameter is larger than the depth of penetration of the electron beam in the sample, or where the signal actually measured originates from a depth less than the diameter of the electron beam. The latter is often the case in SE images. It is well recognized that while SE-1 electrons come from the small feature of interest, SE-2 electrons can come from a larger area of the surface, causing diminished contrast. The situation is much more limiting, however, for BSE images and X-ray maps of thick samples, where spatial resolution can be severely limited. Dropping the beam energy in both cases can lead to higher-resolution images, but with a loss of signal intensity. In the case of X-ray emission, decreased beam energy will excite only low-energy lines, which have a tendency to be overlapped by lines from other elements. There is an opportunity to apply restoration technology to these more complicated problems, but this will require the development of models for three-dimensional PSFs. Such PSFs will be sample-dependent and will vary from one material to another. Addressing this challenge will require both improved models and the ready availability of much higher-speed computers. The microscopy field has seen this before: it was not that long ago that Monte Carlo calculations were restricted to large centralized computers; now they can be done on any laptop computer or by cloud-based computing.
The ultimate limiting factor in image formation is the point-by-point S/N. More signal is better, but it is not always possible to obtain. Samples drift, contaminate, and can be modified during long exposure times. Also, there are practical limits on how much time can be allotted to collect the needed images, which in the case of three-dimensional datasets could be days, weeks, or longer. Many scanning electron microscopists are well aware of the threshold equation that defines how long data must be collected for a given image size, probe current, and detectable contrast level above the noise [Reference Goldstein7]. A similar concept could be applied to determining under what conditions a fine detail might be discernible above the noise. This is part of our ongoing research.
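The spirit of that calculation can be sketched with the closely related Rose visibility criterion, given here as a textbook approximation rather than the specific threshold equation of [Reference Goldstein7]:

```latex
% Rose criterion (approximation): a contrast level C stands out from
% shot noise only if the mean number of detected electrons per pixel,
% n, satisfies
\[
  C \;>\; \frac{5}{\sqrt{n}}
  \qquad\Longrightarrow\qquad
  n \;>\; \left(\frac{5}{C}\right)^{2}.
\]
% With probe current i_p and overall detection efficiency \epsilon,
% n = i_p \tau \epsilon / e for a dwell time \tau per pixel, so
\[
  \tau \;>\; \frac{(5/C)^{2}\, e}{i_p\, \epsilon},
\]
% where e is the electronic charge. For example, resolving a contrast
% of C = 0.10 requires n > 2500 detected electrons per pixel.
```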
Finally, since the procedure described provides a unique way to characterize electron beam size and shape through PSF determination, it can be a very effective way to monitor instrument performance over an extended period of time for maintenance purposes. Furthermore, it can be used to ensure proper correction for astigmatism and focus, with the potential for better automation of those functions.
Conclusions
This work demonstrates the effectiveness of software image correction to improve image quality and resolution. The procedure described has proven successful for many instrument types, operating conditions, samples, and signals. It also provides insight into beam shape and size characterization. Furthermore, significant noise reduction is possible, which can translate into increased productivity by obtaining a given level of image quality in a shorter time.
Acknowledgements
The authors wish to thank the National Science Foundation for its support through SBIR Phase I grant 1519678 and Nanojehm for its continued financial support. The authors also wish to thank Professor Richard Hailstone at the Rochester Institute of Technology for his advice and technical support.