
NetNotes

Published online by Cambridge University Press:  03 January 2012

Thomas E. Phillips
Affiliation:
University of Missouri

Abstract

Selected postings to the Microscopy Listserver from September 1, 2011 to October 31, 2011. Complete listings and subscription information can be obtained at http://www.microscopy.com. Postings may have been edited to conserve space or for clarity.

Type
NetNotes
Copyright
Copyright © Microscopy Society of America 2012

Specimen Preparation:

minimizing carbon deposition rates

I will have an experiment that will require relatively low carbon deposition rates, and I am expecting I'll need to impose “draconian” limitations on the microscope for a few days before doing this experiment. This is in a tungsten LV-SEM with a fairly large sample chamber. It is pumped by a scroll pump in LVac, and a scroll behind a turbo in HVac. We are already using gloves and leaving it in low-vac N2 overnight, etc., but I was wondering if you folks had any simple actions you have found successful in reducing the C contamination within your chambers over and above the usual vacuum maintenance activities, for example, not using carbon paint to mount your samples, etc. Zack Gainsforth Thu Oct 27

When I bought my Cameca SX100 about 6 years ago, the dollar had fallen against the Euro and I could no longer afford a turbo-pumped system. But Cameca and I discussed a cryo baffle trap for the diffusion pump, and they eventually supplied a “Cryo-Tiger” or “Aqua-Trap” system that runs at 100 K and was designed to pump water in a vacuum. Of course, at that temperature it pumps hydrocarbons even better. The compressor is air cooled and I just leave it on all the time, and although I was just expecting it to help with backstreaming from the diffusion pump, I am totally impressed with the degree to which this cryo system not only eliminates backstreaming from the diffusion pump but even stabilizes the carbon contamination signal on my instrument. Under normal conditions, I cannot detect an increase in carbon Kα for many minutes, and even then it is around 1 or 2 sigma. I really like it and highly recommend it for others that don't have an ultra-high vacuum or “dry” pump system. John Donovan Thu Oct 27

Carbon itself does not evaporate or outgas in vacuum at room temperature, so it would be safe to use a water-based Aquadag type of carbon paint for sample mounting. What you want to avoid is anything and everything of an organic or polymer nature that can still fly, float, or even slowly creep over the surface. Meaning—nothing sticky (no carbon pads, double-sided or any other sticky tapes, etc.), no glues (other than water-based Aquadag, TorrSeal, Hysol-1C, and Ted Pella's high-conductivity colloidal silver), no plastics (other than PEEK, Teflon, or Kapton/polyimide). Last but probably most important—get the Evactron and run it for a couple of hours before loading your sample and (if the sample allows it) then for 15 minutes with the sample already in the chamber. If your sample can't stand oxygen plasma at all and therefore is not compatible with Evactron cleaning, then immediately before loading it into the SEM you can treat the sample in a UV/ozone cleaner for a couple of hours. Make sure that the SEM stage is/was lubricated only with something that has a critical vapor pressure in the E-9 range (TorrLube, Y-25, or similar); otherwise outgassing of carbon-containing molecules from the stage lubrication will make all the other precautions fairly irrelevant. Valery Ray Thu Oct 27

Just what I was looking for! I will check into these points and let you know what I find. I find it interesting you think carbon paint is OK, but tape isn't. I'll do an experiment and see if it is in fact cleaner in my system. Zack Gainsforth Thu Oct 27

You are welcome. The difference between water-based Aquadag and carbon tape comes from the fact that the tape contains carbon and the glue. Carbon itself is a solid, and its critical vapor pressure at room temperature should be quite low. I do not remember the exact value of the CVP of C, and I am out of my office and cannot look it up at this moment, but I do know C must be heated to 2500 K to get it to sublimate in vacuum. This tells me that C should not be volatile at 273 K. Solid carbon therefore has no way to migrate onto your sample and show up in the experiment, even if it is present in the chamber. That is why water-based Aquadag paint should be safe—it contains only carbon powder and DI water. Glue, however, is a very different story. Almost any glue will contain some un-polymerized organic molecules, which will outgas (even if only at extremely low partial pressures) and make their way to the area where the electron beam hits the sample. Most organic molecules will be broken down by electron-beam radiation and will deposit carbon in some quantity. Valery Ray Thu Oct 27

In the recent thread on minimizing carbon contamination in the SEM, a commenter cited a group of water-based compounds as being good to use. They were: water-based Aquadag, TorrSeal, Hysol-1C, and Ted Pella's high-conductivity colloidal silver. I tried one of them and the stuff was really syrupy and set quickly, so that my small and fussy samples were all but impossible to mount in it. I wonder if the original poster (Valery Ray) or anyone might point me to something like this but that is not too viscous and that sets slowly (actually I'd settle for either). I'd love to get away from sticky tape. Tobias Baskin Thu Oct 27

Just for clarity: Aquadag is water-based and fairly conductive. If it is too syrupy, then it can be diluted with DI water to any consistency. If after dilution the dried layer is too thin, then you can re-coat after drying. In the diluted condition it handles just like paint. “Conductive Liquid Silver Paint” or “Colloidal Silver Liquid” from Ted Pella is solvent-based and very conductive, and has almost no binder. It can be gently baked in a vacuum oven (if the sample allows) and has almost no outgassing after the bake. If it is too syrupy, then it can be diluted to any consistency with the “extender,” which is also sold by Ted Pella. Hysol-1C and TorrSeal (which are in reality the exact same thing; TorrSeal is re-branded Hysol-1C) are AFAIK all-solid epoxies and thus viscous. They are also not conductive. There is another glue that I have worked with, which has quite low outgassing—it is sold by Allied High Tech as “Epoxy Bond 110.” It is very liquid after mixing and will not thicken at all until heated. The catch is that it requires heat for curing and is not conductive. I would also be very interested to learn if there are other no-outgassing or low-outgassing glues out there. Valery Ray Fri Oct 28

Another low-vapor-pressure conductive adhesive is Epoxy Technology (Epo-tek) H20E. It is a silver-filled, heat-curing epoxy. If your sample can tolerate being heated to 80°C then it's great, albeit somewhat expensive. It is a little viscous as it contains a fair amount of silver. We use it consistently at 10⁻⁹ Torr at room temperature and also at 10⁻¹¹ Torr at cryogenic temperatures. I have no connection with Epo-tek, just a satisfied customer. Lyle Gordon Fri Oct 28

Image Processing:

digital standards

Does anyone know if there is a “standard” for digital imaging, especially for microscopy-related digital imaging? If so, who is in charge of making such a standard? Zhaojie Zhang Thu Oct 13

I don't think there is a standard as to what file format to use or what image processing software to use, but there are ethical guidelines as to what can be done and what should not be done, and how the image manipulations should be documented. One such article can be found here: http://swehsc.pharmacy.arizona.edu/exppath/micro/digimage_ethics.php. MSA also has a statement on ethical image processing: http://www.microscopy.org/resources/digital_imaging.cfm. The gist of both is that you need to stay as true as possible to the original image and pixel data (no compression), that anything that could lead to artifacts needs to be documented, and, of course, that whatever you do to the image is documented in such a fashion that it can be repeated (as all scientific data should be). Mike Bode Thu Oct 13

Do not use a “lossy” compression, e.g., JPEG. Use TIFF to save your images. Journals will publish guidelines in the “Instructions to Authors” section as to what, if any, modifications may be allowed. Always save the original, unaltered image file and a backup. Geoff McAuliffe Fri Oct 14

Answers to many questions can be found here http://www.theiai.org/guidelines/swgit/. Most of the definitions and procedures concern legal requirements and are applicable to good laboratory practices. Rich Brown Fri Oct 14

I beg to differ: JPEG is nearly indistinguishable from TIFF for photographic reproduction. That is one reason why all of the digital cameras in the world default to JPEG, even the expensive digital SLRs. The only place in a micrograph where you can actually discern compression artifacts is around micron markers or other text, and even that is only visible by zooming in on the region. As soon as you insert that image into PowerPoint for a report or presentation, it is automatically compressed anyway. However, I do agree that one should archive the raw original in DM or whatever format the camera produces. John Mardinly Mon Oct 17

You are missing an important point. By eye you may not be able to distinguish a JPEG from a TIFF. However, our images are scientific data, and we end up doing quantitative analysis on that data. JPEG compression changes the data. What is even worse, if you save a JPEG image a second time, changes are made on top of the previous set of changes. This corrupts the integrity of the scientific data. Do a simple test: take an image and store it as TIFF, then store it as JPEG, then open the JPEG and compare the results pixel by pixel. You no longer have the same information. If you then store the JPEG again, the compression routine reformats the data yet again. You should always, repeat always, store your original data in a lossless format; RAW and TIFF are two examples. You can use JPEG to post on the web or print posters, but never when any quantitative work is being done from that data set. Nestor J. Zaluzec Mon Oct 17
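For anyone who wants to run the pixel-by-pixel test Nestor describes, a minimal sketch in Python (assuming the Pillow and NumPy packages and a hypothetical 8-bit grayscale file named original.tif) could look like this:

    import numpy as np
    from PIL import Image

    # Load the original data as an 8-bit grayscale array.
    original = np.array(Image.open("original.tif").convert("L"))

    # Round trip through uncompressed TIFF: the data should come back unchanged.
    Image.fromarray(original).save("copy.tif")
    tif_back = np.array(Image.open("copy.tif"))
    print("pixels changed by TIFF save:", np.count_nonzero(tif_back != original))

    # Round trip through JPEG: the compressor alters pixel values.
    Image.fromarray(original).save("copy.jpg", quality=90)
    jpg_back = np.array(Image.open("copy.jpg"))
    print("pixels changed by JPEG save:", np.count_nonzero(jpg_back != original))

    # Saving the JPEG a second time re-compresses the already altered data.
    Image.fromarray(jpg_back).save("copy2.jpg", quality=90)
    jpg_back2 = np.array(Image.open("copy2.jpg"))
    print("pixels changed again by second save:", np.count_nonzero(jpg_back2 != jpg_back))

The TIFF round trip should report zero changed pixels, while each JPEG save will typically report many thousands on a real micrograph.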

There is another important distinction between JPEGs and TIFFs, apart from the loss of data incurred via JPEG compression. A TIFF file retains the information for each color (Red, Green, and Blue for RGB; Cyan, Magenta, Yellow, and Black for CMYK) in a separate channel, allowing colors to be balanced and adjusted relative to each other. JPEG compression mixes and down-samples the color information (part of why a JPEG is one third or one quarter the size of a TIFF), which means the channels can no longer be treated individually. Nowadays a good JPEG will be accepted by many publishers, particularly for online publication, but for color-critical work most still insist on TIFFs. Paul Callomon Mon Oct 17

Further questions: 1. Remember that in the “old” days all films had a speed, ISO 100, 200, etc. Nowadays, do we just use the “gain” to control the speed and “grain size” on the digital camera? Is there a “standard”? 2. Quantitative analysis is becoming a norm of digital imaging. Does it always require 16-bit images rather than 8-bit? Has anyone compared the same sample using 8-bit and 16-bit images and seen a significant difference? Zhaojie Zhang Mon Oct 17
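As a purely synthetic illustration of the bit-depth question (Python and NumPy only; the numbers are hypothetical, not from any real camera), consider how a low-contrast feature survives conversion from 16 bit to 8 bit:

    import numpy as np

    # Simulate a weak-contrast feature: counts spanning only 200 ADU
    # out of a full 16-bit range (0-65535).
    signal_16bit = np.linspace(40000, 40200, 1000).astype(np.uint16)

    # Rescale the full 16-bit range into 8 bits, as a display conversion would.
    signal_8bit = (signal_16bit / 65535 * 255).astype(np.uint8)

    print("distinct gray levels at 16 bit:", np.unique(signal_16bit).size)  # about 200
    print("distinct gray levels at 8 bit: ", np.unique(signal_8bit).size)   # 1 or 2

Intensity differences that are resolved at 16 bit collapse into one or two gray levels at 8 bit, so any quantitation within such a feature is lost.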

While I agree that JPEG is not a suitable format to store quantitative data, there is one little issue about TIFF files I'd like to raise: TIFFs are actually very flexible containers for all kinds of image data, with differences in channels (RGB, CMYK, alpha), pixel depth (4, 8, 16, 32, … bit), layers, etc., and also compression. There are TIFFs out there that have no compression, others with lossless compression, and even some with lossy compression, similar to JPEGs. Hence, it is important to know how to save a TIFF for later analysis; if in doubt, uncompressed or LZW. Guenter Resch Mon Oct 17
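To make that concrete, here is a small sketch (Python with Pillow and NumPy; the file names and the synthetic gradient are arbitrary) that writes the same data uncompressed and with lossless LZW, then verifies that both come back unchanged:

    import os
    import numpy as np
    from PIL import Image

    # Synthetic 8-bit gradient image; repeated rows compress well with LZW.
    data = np.tile(np.arange(256, dtype=np.uint8), (512, 2))
    img = Image.fromarray(data)

    img.save("uncompressed.tif")                   # no compression
    img.save("lzw.tif", compression="tiff_lzw")    # lossless LZW compression

    for name in ("uncompressed.tif", "lzw.tif"):
        back = np.array(Image.open(name))
        print(name, os.path.getsize(name), "bytes, identical:",
              np.array_equal(back, data))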

I don't think that you and Nestor (or I) are in disagreement. As soon as you print an image, it doesn't really matter if you use JPEG or TIFF. The printing process will likely create more artifacts than a reasonable JPEG compression. However, once you have acquired an image and stored it in a lossy format, it is no longer amenable to many image processing steps that might be necessary to extract the information that is needed. And that may include steps that you didn't think of when you saved the image initially. Look at it this way: would you store the quantitative results of any experiment in a way that ensures that the data degrades? Of course not. An image should be treated the same way. As soon as you acquire it, it should be saved in a way that does not degrade the data. Since any lossy compression does lead to deterioration, it should not be used. Of course, making a copy and saving it as a JPEG for printing is probably OK. I don't think that anybody would scan such an image from a magazine and then assume that they have recovered the original data. There was a very intense discussion about this a couple of years back with regard to 21 CFR Part 11, where the FDA tried to implement a way to make sure that data, including images, are available at a later stage to verify results. I am sure this will come up again at some point. They also came to the conclusion that the original data must be saved in a way that it can be recovered. A lossy compression will not allow that. And why take the risk? We now have TB-sized hard disks that can store tens of thousands of images at very low prices. Mike Bode Mon Oct 17

Image Processing:

particle size

I have a question about image processing. A very common task for a microscopist is to measure the size of particles in a TEM image. More generally, one needs to get the size distribution. If the particles are round, we measure the diameter of each particle. If the particles are elongated, we treat each particle as an ellipse (ImageJ can do that). The question is: how can we define the term “size” in general? When we have a mixture of elongated particles, triangular particles, complex shapes, and others, it is rather difficult to explain what “size” is. Could you please comment on this question or recommend some books/articles on the topic? If we treat an elongated particle as an ellipse, how do we estimate the error? Dmitry Bagrov Sat Sep 24

Dmitry brings up a good point. Those of us in the lab involved in particle characterization by image analysis and other techniques often note the problem where our clients want a single number to describe a complex distribution of particles that vary in projected area and shape. We call it “mono-numerosis.” Even for something as simple as spheroidal particles, we will typically measure at least 1000 single particles (see below for some comments about agglomerate rejection) and plot the distribution of equivalent circular diameters. We typically compare these measured distributions to model normal or lognormal distributions. The broader the distribution, the more particles one needs to measure to get precise parameters for the distribution. See Masuda and Iinoya (1971) J. Chem. Eng. Jpn. 4(1):60–66. We have looked at edge rounding of silver halide grains by TEM. We have a model that fits the projected particle boundary to a model of a super-sphere. Anytime one fits a measured boundary to a model shape, one should always examine the residuals from the fit and account for the large discrepancies. We have also fit particle boundaries to a model ellipse. Again, we look at the fit residuals. We have also looked at particles of varying shapes (including triangles); computing Fourier Shape Descriptors helps here. See Bowman, E. T. et al. (2001) Geotechnique 51(6):545–554. The general problem comes down to measuring a “feature vector” for each “blob” the image analysis algorithm detects. One then needs to do some classification analysis to sort out errors and then classify the “single particles” by size and shape. For us, the first step is usually rejection of agglomerates. Most single particles are “convex”—meaning no re-entrant segments. We typically compute the “maximum intrusion distance” by finding the closest point on the convex hull for each point on the “blob” boundary and then finding the point on the boundary with the largest distance to the convex hull. This is the maximum intrusion distance. This provides very reliable agglomerate detection. Once we have rejected agglomerates, we look at the rest of the items in the “feature vector” measured for each blob. These quantities might be the equivalent circular diameter, some generic shape factors like the circularity, perhaps the ratio of Feret diameters, and some Fourier Shape Descriptors or some Gray Level Moments (see Hu (1962) IRE Trans. Info. Theory 8:179–187; Belkasim et al. (1991) Pattern Recognition 24(12):1117–1138; and Gonzalez and Woods (1993) Digital Image Processing, 514–518). With a new problem, we typically make some training sets of particles that we think represent the different classes and look at the differences between and within the classes for elements in the feature vector to come up with some proposed classifiers. This generally involves a bit of work looking at scatter plots of proposed classifiers for the different training-set classes. I should note that we use the AnalySIS Five image processing software from Olympus-SIS. I have no financial interest in this other than being a satisfied customer. We have done a lot of custom programming to get the feature measures we need. Often measurements are computationally intensive, and once we have verified that a prototype Imaging-C module works, we will move the computationally intensive functions into a DLL that has been compiled with an optimizing compiler and use the functions in the wrapper Imaging-C modules. John Minter Sat Sep 24
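For readers without access to a commercial package, the same basic workflow can be sketched in a few lines of Python with scikit-image. This is a simplified illustration only: the file name and cutoff values are hypothetical, and solidity is used as a crude stand-in for the maximum-intrusion-distance test John describes.

    import numpy as np
    from skimage import io, filters, measure, morphology

    # Hypothetical input image of bright particles on a dark background.
    image = io.imread("particles.tif", as_gray=True)

    # Global threshold, then discard tiny noise blobs.
    binary = image > filters.threshold_otsu(image)
    binary = morphology.remove_small_objects(binary, min_size=50)
    labels = measure.label(binary)

    singles, agglomerates = [], []
    for region in measure.regionprops(labels):
        # Equivalent circular diameter: diameter of a circle with the same area.
        ecd = 2.0 * np.sqrt(region.area / np.pi)
        # Solidity = blob area / convex-hull area; deeply re-entrant blobs
        # (likely agglomerates) have low solidity. The 0.9 cutoff is arbitrary.
        if region.solidity > 0.9:
            singles.append(ecd)
        else:
            agglomerates.append(ecd)

    print(len(singles), "single particles,", len(agglomerates), "rejected as agglomerates")
    if singles:
        print("mean equivalent circular diameter:", np.mean(singles), "pixels")

From the list of accepted diameters one can then plot the distribution and fit a normal or lognormal model, as described above.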

This question (how to characterize particle size) is, at one level, no different to any other exercise in statistical summary: how to give as much information as possible with just one number (statistically, the first moment, or mean), or with just two numbers (first and second moments—mean and standard deviation), or with just three numbers (the first three moments, or mean, standard deviation, and skewness), etc. I appreciate that in saying this I am merely restating one of the many salient points made by John Minter but I think it is worth re-stating, as being arguably the last common ancestor before it is necessary to branch into domain-specific answers. JM gives a summary of one domain, to which it may be useful to add a summary for sedimentary petrography and optical microscopy. Particle shape is most often of interest to petrographers not for the particles themselves but for understanding the void space between them. Therefore, packing is usually the most important single parameter (first moment), for which size is little more than a proxy. Hence the equivalents to JM's single parameters are measures such as points on the scales of Krumbein, Rittenhouse, Harrel, Powers, or Pilkey. These were developed when analysis was purely visual, with no computational aids. Measuring angles and lengths was a time-consuming process. Does that mean they are redundant now that we can quickly click on many points on an image of representative grains and hence calculate any number of moments of the particle size distribution? Not if we return to Dmitry's question: how to summarize the important characteristics of size and shape in a heterogeneous sample, using just one or two numbers. These comparator charts were developed by Krumbein and others not only because it was not easy to enumerate a representative sub-population of particles, or calculate their distribution, but also because they are good ways of describing the distribution: in effect, they are domain-specific first moments. So, in summary, I believe that the answer to Dmitry's question is domain-specific, wherein again I am only repeating JM's contribution. But the next step is to ask the question: to what extent does this 2D slice through a 3D medium capture the information the end-user is seeking? Staying with sedimentary petrography, a key piece of information is pore connectivity, and hence the shape and size of pore throats. What can a 2D slice tell us about this? Robert Ehrlich has spent a large part of his life studying this and has provided an extensive literature that can be readily searched, and which it would be presumptuous of me to even try to summarize, but I believe it is an interesting question whether or not one is a petrographer. Finally, one of the main reasons I subscribe to this newsgroup is to read the comments of workers in other application domains. Sometimes, what is routine in one area can be a new insight in another, so thanks to all those, like JM below, who take the time to answer questions. Barrie Wells Sun Sep 25
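As a trivial numerical companion to this point, the first few summary statistics of a measured diameter distribution can be computed directly; the sketch below assumes NumPy and SciPy and uses synthetic lognormal diameters purely for illustration:

    import numpy as np
    from scipy import stats

    # Synthetic example: 1000 lognormally distributed diameters around ~50 nm.
    rng = np.random.default_rng(0)
    diameters = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=1000)

    print("mean:", np.mean(diameters))                       # first moment
    print("standard deviation:", np.std(diameters, ddof=1))  # from the second central moment
    print("skewness:", stats.skew(diameters))                 # third standardized moment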

Many thanks to everyone who answered my question; I highly appreciate your help! Although the procedure of particle size measurement seems simple, it is very difficult to measure sizes accurately. Since the task is extremely common, many of us face this problem of image processing/metrology. As for me, I have come to understand that I always have to keep in mind the accuracy of measurement that I need. For example, if the desired accuracy is not too high, I can treat a hexagonal particle as an ellipse without loss of reliability. Dmitry Bagrov Mon Sep 26

You could certainly do that, but I don't understand why. You would be making an a priori assumption about the shape of the particle. The best way to do this would be to use a calibrated image, then use a thresholding technique and let the software give you the accurate area or circumference or a diameter (max, min, average) of the particle. If that is not possible (for example, if the contrast between particle and matrix is too low), you could try image enhancement techniques, or techniques developed for grain boundary analysis, which use more information than just intensity values. As has been pointed out by John and Barrie, “size” is not a very precise parameter. It could mean “length” for one person, “area” for another. I think once you have defined exactly what you want to measure, you will be able to devise measurement techniques that give you better results. Keep in mind also that you are typically only looking at a 2D representation of a 3D object, and your measurements could be biased. Mike Bode Mon Sep 26
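A minimal version of the threshold-and-measure approach Mike suggests, again sketched with scikit-image (the pixel size and file name are placeholders, not values from any real instrument):

    from skimage import io, filters, measure

    PIXEL_SIZE_NM = 0.5                               # placeholder calibration, nm per pixel
    image = io.imread("particles.tif", as_gray=True)  # hypothetical calibrated image
    labels = measure.label(image > filters.threshold_otsu(image))

    for region in measure.regionprops(labels):
        area_nm2 = region.area * PIXEL_SIZE_NM ** 2
        # Maximum caliper (Feret) diameter; needs a recent scikit-image (>= 0.18).
        dmax_nm = region.feret_diameter_max * PIXEL_SIZE_NM
        print("particle", region.label, ": area", round(area_nm2, 1), "nm^2,",
              "max Feret diameter", round(dmax_nm, 1), "nm")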

Instrumentation:

uninterruptible power supply

We have an FEI Quanta 250 SEM. We frequently have power surges in the building and are looking to get an uninterruptible power supply (UPS) for it. We are looking at a Toshiba 1600XP series (3.6 kVA). The sales rep is asking if we want it hard-wired into the electrical system or plugged into the wall; based on your experience, is one better than the other? Is the Toshiba a good model, or are there others out there that are better? Michelle Shafer Mon Sep 26

We have an FEI dual-beam FIB and use a Toshiba 1600EP UPS system (it has 12 battery packs), installed in 2006. Among all our instruments, the FIB seems to survive power outages better than most, so I'd say yes, it is a good model. Ours is hard-wired and met local ordinances. Just now I am replacing the batteries (after five years); each battery pack is quite heavy and contains six 12-volt lead-acid batteries. The batteries are about $20 each. Robert Keyse Mon Sep 26

Based on redundancy considerations, I would have the UPS plugged into the wall and plug the SEM into the UPS using the same type of plug. If the UPS fails to the point that you can't use it in bypass mode, or if the UPS should be removed for repairs, you will be able to re-plug the SEM directly into the wall and have it back in operation relatively quickly; in the case of a hard-wired installation you would need a licensed electrician to temporarily bypass the UPS. Use overrated twist-lock plugs for reliability, and if Safety permits, lock them mechanically to prevent accidental disconnection. Valery Ray Mon Sep 26

TEM:

calculating liquid nitrogen cost

When ordering or pricing supplies, how does your lab factor in the costs of acquiring, obtaining, and moving tanks of LN2 when calculating a per-liter cost to users? Alice Ressler Thu Sep 15

We used to charge our users for LN2 usage and determined that if we charged for less than 10 liters (up to 10 liters) we lost money. Funnily enough, people stopped asking for LN2. We now average a certain amount per day for our instruments and spread that cost over the range of instruments, and our users do not see that amount on their bill. Some days we lose a little money and other days we make money. We got out of the nickel-and-dime charging a few years ago, and it really streamlined our accounting. Garnet Martens Mon Sep 19

TEM:

contaminating samples

I recently had someone put a sample in our TEM that gummed up the holder and contaminated the objective aperture. Another person lost their support film, and it dropped into the lenses, necessitating another service call. Two other people have asked me to allow them to put in questionable samples. We do not do any thin sectioning here, so it is all particulate samples. When I train individuals, I ask what their samples will be, but many times, after they are trained, these same people take on samples from colleagues, and it is those samples that I worry about. What do some of the rest of you do in situations like this? I am running a multi-user facility, and people run their own samples after they are trained. Do any of you have a banned list of substances that you do not allow in your instruments? Norm Olson Thu Sep 8

You cannot monitor each person and their samples. You just have to try to teach them what is appropriate and then trust them to act accordingly. It sounds like the one person used a support grid with too much sample on it, thus getting residue on the holder. Likely this sample “boiled” in the beam, and that was what contaminated your aperture. I tell them that, first of all, you should not be able to see the sample on the grid. If you can, then it is probably too thick. Secondly, the grid must be dry before inserting, and third, if any sign of sample instability is seen, then they must immediately remove it from the microscope. This usually means there is residual material that is not stable when exposed to the heat and energy of the electron beam. The other person had a poorly prepared sample as well. Most likely it was not firmly seated in the holder and thus fell off the holder. I would think any support film that was not well adhered to the grid would normally come off during sample preparation. This should certainly not happen with any sample and is not a problem with sectioned material. The sections must adhere well to the support grid or they would not withstand subsequent staining. It is pretty easy to hold a grid up to the light and see if the surface is reflective due to the presence of a film. If some squares are dark and others light, then you know the film is not covering the entire grid. The instruments are there to be used, so I would not like to indiscriminately ban samples. However, users should be encouraged to talk to you before they put questionable samples in the scope so that you can both brainstorm about possible problems and ways to solve them. Sometimes this can be as simple as sandwiching the sample by putting an extra layer on top of it; either carbon or Formvar film will work. If there is a way to mess up a scope, a student will find it. That is just one of the many reasons for training users thoroughly on not just the microscope but also sample preparation, and then all you can do is hope they absorb the message. Debby Sherman Thu Sep 8

We had one group of dental materials students at Michigan that were looking at human teeth. They did not attach the teeth properly, and they fell off the holders. Of course, they never admitted that to anyone, and they had so many teeth, they just kept putting more samples in until they got the photos they wanted. The next time I opened the specimen chamber, I found it full of human teeth. It did not take long to figure out where they came from. John Mardinly Thu Sep 8

EDS:

detector cool-down time

We have a traditional Si(Li) EDS detector on our CM200 microscope. We have just noticed that when we add liquid nitrogen, the crystal takes almost two days to cool down. We observe half a million counts just after we add liquid N2 with the detector retracted, and it takes almost two days to come down to around 10. What could be the possible reason for this? I think it usually takes only about two hours for the crystal to cool down. I don't think there is any ice in the Dewar. Any ideas? Ram Chandra Tue Sep 13

It sounds like you are letting your detector come to room temperature between uses. It is more likely to develop ice buildup in this scenario. I would verify that you really do not have ice inside the Dewar, since that is the most likely cause of your extended cooldown time. Another possibility is that the cooling connection (contact) between the LN2 and the detector has become loose. John Bozzola Tue Sep 13

It sounds like you are applying voltage to the detector as soon as you have introduced the LN2. I don't know if that is a good idea. I was of the impression that the Li can quickly diffuse out of the crystal if voltage is applied while the crystal is warm. I think you would want to let the crystal cool for some time before applying power. Once the Li is gone, the crystal would need to be replaced. I would like someone else with more experience to comment on that. We keep our detectors cold 24/7. You don't mention what brand of system you have. We have an older Oxford ISIS. Its normal behavior is to register hundreds if not thousands of counts when the software is first started, as the various parameters are optimized internally. After about 5 minutes, the count rate stabilizes at around 200–300 cps with the beam off. I would not expect it to get down to 10 cps because there is always a strobe peak present for reference. Warren Straszheim Tue Sep 13

If Warren is correct and you've been turning on the bias before at least a 2-hour cooldown (my more conservative customers prefer overnight), you probably need a detector rebuild because the Li has been pulled out of the Si; the optically coupled FET, which is also cooled, may be dead as well. Ken Converse Tue Sep 13

Sounds like one of two things: loss of Dewar vacuum or perforated window. Due to inexperience, I am unaware of the other “ten” alternatives. Fred Monson Tue Sep 13

I agree completely about it being better to keep the LN2 in there all the time. Warming up can degrade the internal vacuum, especially if the detector vacuum system is not well designed. I wrecked a detector once by warming it up over Christmas, after being assured by the manufacturer that it would be OK. On re-cooling (reapplying the bias after a day of re-cooling), the resolution had degraded appreciably. I rechecked with the manufacturer, who then said that was to be expected! I can't post that manufacturer's name here, but it is available on request. I would advise against buying a detector from that manufacturer. Ritchie Sims Tue Sep 13

I was just thinking that another side effect of warming and cooling the detector might be that the solid rod that runs down the length of the snout (for cooling purposes) will expand and contract with each cycle. Does this expansion and contraction loosen the thermal connection between the rod and chip, or could it push the crystal up against the window? Justin A. Kraft Tue Sep 13

References

Masuda and Iinoya (1971) J. Chem. Eng. Jpn. 4(1):60–66.
Bowman, E. T. et al. (2001) Geotechnique 51(6):545–554.
Hu (1962) IRE Trans. Info. Theory 8:179–187.
Belkasim et al. (1991) Pattern Recognition 24(12):1117–1138.
Gonzalez and Woods (1993) Digital Image Processing, 514–518.