
NetNotes

Published online by Cambridge University Press:  02 July 2015

Copyright © Microscopy Society of America 2015 

Edited by Thomas E. Phillips

University of Missouri

Selected postings from the Microscopy Listserver from March 1, 2015 to April 30, 2015. Complete listings and subscription information can be obtained at http://www.microscopy.com. Postings may have been edited to conserve space or for clarity.

Specimen Preparation:

critical point drying of hydrogel

I have a thin membrane that contains a hydrogel with pores estimated to be 100-200 nm. I would like to visualize the pore structure in the SEM, and I have heard that critical point drying is a suitable method for preparing my sample. I wondered whether my sample prep can be as easy as exchanging the water in my hydrogel with ethanol, placing the sample in the critical point dryer while still in ethanol, and drying it. If I want a cross-sectional view of the membrane, is it sufficient to cut the membrane while it is soaked in ethanol, or are there other precautions I should take? Also, is there any recommendation for how much gold I can sputter onto the dried sample? 50 nm of gold would already influence my pore size significantly, and I don’t know if gold gets deposited uniformly everywhere in the gel. Daan Witters [email protected] Tue Apr 28

Others can probably comment better than I can about the entire plan, but there are a couple of items I would address immediately. First, I would be careful about cutting the membrane to reveal a cross section. I have had clients cut samples to see the structure of a 300 nm layer on the surface. Even when using a fresh razor blade, they were surprised at the amount of damage left behind. It is ugly at 15k×. Your situation could be the same. However, if you are not looking at the surface but at a rather thick core area, simple cutting might work for you. We used to apply 15 nm of gold with our old coater. That is terribly thick by today’s standards. We now often coat with as little as 2 nm of iridium. I would worry that you might still have charging due to the porous nature of your material. You probably want small volumes with something nearby to conduct away the charge. You might also want to look into variable pressure SEM to help mitigate charging. Warren Straszheim [email protected] Tue Apr 28

You might have to resort to cryo-SEM of uncoated, properly frozen samples if you really want to get the real answer. It is tricky, though, for the inexperienced. You need to have the right cryo-SEM and experience. Reinhard Rachel [email protected] Tue Apr 28

Cryo-SEM is a good idea. With 200-300 nm pore size, you should be able to freeze the hydrogel onto a stub with little or no ice crystal formation within the membrane itself, then place it on the cryo-stage in the SEM and carefully etch away the ice. We’ve done this quite a bit with plant material, and plant cell walls are essentially hydrogels, though with much smaller pores. You’d need to be able to do well-controlled coating in the cryo-transfer unit, which the modern instruments should be capable of. Soaking the hydrogel in something conductive before freezing would help too and may allow you to image the membrane uncoated. Rosemary White [email protected] Tue Apr 28

The question really is: do you need an analysis of the structure of the pores, or do you just need to determine their size accurately? If you “only” need the size, I would recommend techniques other than EM because, as already mentioned, hydration is probably critical. Perhaps you could consider, for example, atomic force microscopy (AFM), which can be performed in liquid? Stephane Nizets [email protected] Wed Apr 29

Specimen Preparation:

osmium problem

I just had a big problem with the osmium fixative I used (2% in 0.1 M cacodylate buffer) turning a purplish black after 3 hours of fixation. The tissue was slimy and basically ruined. Any ideas of what could have gone wrong? The osmium was perfectly clear and slightly yellow, as always, when I made the solution. I also used the same type of vials and buffer that I have used before. What kind of solutions could cause this reaction? JoAnn Buchanan [email protected] Wed Apr 29

The most common cause of osmium turning color is glutaraldehyde remaining in the sample or at the top of the tube. It mixes with the osmium and oxidation occurs. I buffer wash 3 times for 20 minutes each and make sure that I rinse the whole vial and the inside of the cap, then I wipe the top of the vial to dry it. The extended wash time is used to stop “peppering” of mitochondria and other dense structures. My osmium fixative is in cacodylate buffer, as you use, but at 1%. Several microscopists that I know do use 2% osmium. I have never gone much over an hour and do not understand the need for the 3 hours you mention. I had learned that osmium penetrates into a sample about 0.5 mm in an hour and then starts to block itself from going deeper into a tissue sample. That is why we keep the size of all samples to 1 mm cubes or smaller. If a larger sample is cut in two after an hour, one can see that it is white in the center, and hence the osmium did not reach that point to do its work as a fixative. I had a sample back in 2007 or 2008 that had a huge amount of fat in it. I saw that the osmium did start to turn within the hour. I assume that the glutaraldehyde was not washed out completely, for back then I buffer washed only a half hour. Pat Connelly [email protected] Wed Apr 29

Specimen Preparation:

hair samples

We are trying to preserve newborn hair samples by HPF/AFS for TEM analysis. We are having the problem that we cannot section the hair samples because the sample keeps popping out of the Epon blocks. I would appreciate learning how you did your infiltration, what resin you used, and whether you have any tricks for overcoming the problem of small samples popping out of blocks. Erin Tranfield [email protected] Thu Mar 12

Hair and other keratin fibers are not easy tissues and must be treated differently from normal tissue samples. However, what method you need depends on whether you are looking only at the hair above the skin, or also at the hair follicle. For the moment I’ll assume you are interested only in the hair above the skin. Hair is already fixed by nature. It is also very dense. The water content is very low, and the cells are dead. It is the opposite of normal living cells in terms of the problems it poses for TEM preparation. It can also be very easy to work with, depending on what features you want to see. The easiest method is to wash the hair to remove external lipids and dirt, place the hairs across a small frame made of plastic, or thread hairs through a narrow plastic tube, embed in LR White (epoxy is OK too), trim, cut sections of about 100 nm (gold interference color) with a diamond knife, and section stain with uranyl acetate and lead citrate (slightly extended stain times compared to normal), and you can see most features. The resin will not penetrate the hair, but the hair will sit inside the resin. There are often problems with folding (you can reduce this with thicker sections) and sometimes problems at the edges of the fibers where the fiber has swollen with the water in the knife boat while the resin has not. If you want to see the intermediate filaments that make up most of the cortex of the hair, you have to do something more complicated involving repeated reduction treatments to open up disulfides for stain attachment, combined with osmium. There is also a silver nitrate method that allows you to see the filaments, but at the expense of seeing various other structures. I’ll send a separate email to you with a paper that colleagues and I put together with all these methods: Harland, D. P., Vernon, J. A., Walls, R. J., and Woods, J. L. (2011). Transmission electron microscopy staining methods for the cortex of human hair: a modified osmium method and comparison with other stains. Journal of Microscopy, 243(2), 184-196. doi: 10.1111/j.1365-2818.2011.03493.x. Kind regards, Duane Harland [email protected] Fri Mar 13

Specimen Preparation:

removing Kapton tape

We have a very valuable sample that was wrapped in Kapton tape for synchrotron and micro-CT analysis. The problem now is that we want to remove the tape and sticky residue to prepare the sample for FIB and TEM. Does anyone have any recipes (chemical or otherwise) for getting the polymers off cleanly? We could bake/burn the sample (it’s a refractory ceramic), but we don’t want to do this unless necessary. Chad Parish [email protected] Fri Mar 27

I had good luck removing Kapton tape residue from thin (~250 µm) semiconductor samples by soaking in warm (~40°C, covered beaker under fume hood) acetone overnight and then gently rubbing with a Q-tip soaked in acetone on a flat piece of Teflon plastic. Valery Ray [email protected] Fri Mar 27

Specimen Preparation:

yeast

I always have problems embedding/infiltrating yeast! I have tried different resins, vacuum steps, etc. Does anyone have an embedding protocol/resin that works? Sue Van Horn [email protected] Fri Apr 17

There are many resources on the web concerning the tricky task of properly embedding yeast cells for ultrathin sectioning. My favorites, as of today: Mary’s Manual (Boulder, CO, USA): http://bio3d.colorado.edu/docs/mmanual.pdf, and Giddings, T. H., Jr., O’Toole, E. T., Morphew, M., Mastronarde, D. N., McIntosh, J. R., and Winey, M. (2001). “Using rapid freeze and freeze-substitution for the preparation of yeast cells for electron microscopy and three-dimensional analysis,” Methods Cell Biol. 67, 27–42 (yes, Mary is one of the co-authors). Also, Kent L. McDonald, “Out with the old and in with the new: rapid specimen preparation procedures for electron microscopy of sectioned biological material,” Protoplasma (2014) 251:429–448, DOI 10.1007/s00709-013-0575-y. This article is quite helpful as it shows that some of the paradigms of “old” embedding protocols are clearly outdated, if not to say “wrong.” It also depends on your equipment, the specific question, and so on. Reinhard Rachel [email protected] Fri Apr 17

Not sure what type of yeast you work with. We work with budding yeast and have had good luck with microwave radiation, especially for stationary-phase yeast, whose cell wall is very tough: http://www.ncbi.nlm.nih.gov/pubmed/17156022. For log-phase cells, extended infiltration (at least one overnight) works fine using Spurr. I believe the low viscosity of Spurr helps. Best of luck, and let me know if you need a reprint. Zhaojie Zhang [email protected] Fri Apr 17

X-ray Microanalysis:

NIST DTSA-II Iona released

NIST DTSA-II has recently been updated to version Iona. It can be downloaded for free from http://www.cstl.nist.gov/div837/837.02/epq/dtsa2/index.html. DTSA-II provides a host of tools for quantitative EDS microanalysis, including quantification, simulation, and measurement planning. Iona has a number of improvements, both large and small, which are detailed in the release notes on the web site. Further details on quantitative analysis with NIST DTSA-II are available in Newbury and Ritchie’s J. Mater. Sci. article “Performing elemental microanalysis with high accuracy and high precision by scanning electron microscopy/silicon drift detector energy-dispersive X-ray spectrometry (SEM/SDD-EDS)” (free for download from http://link.springer.com/article/10.1007/s10853-014-8685-2). This article demonstrates the potential of the modern EDS detector to perform reliable, quantitatively accurate compositional measurements even for some very challenging samples. Nicholas Ritchie [email protected] Tue Apr 21

Image Processing:

exporting spectrum image slices

I’m trying to export data from a 2D EELS spectrum image in Digital Micrograph and I’m not sure what the best approach is. Here is my problem: 1. I have a 2D EELS map/SI of a thin film interface, where x is some width parallel to the interface, y is some width perpendicular to the interface, and z is the EELS energy range (400–700 eV). 2. I would like to integrate all the spectra in plane (x-direction) to improve signal-to-noise (this would essentially leave me with an EELS line scan along y). 3. I would then like to export slices at specified integration windows normal to the plane (y-direction) to text files. The only way I can currently do this is by drawing an ROI onto the SI, which generates an individual spectrum integrated across x. I then have to export this and drag the ROI, repeating ad nauseam until the entire y length of the scan is traversed. Is there a simpler and faster way to do this? Please let me know if you need more clarification. Steven R. Spurgeon [email protected] Mon Mar 9

If you use MATLAB you can import DM3 files. Here is what might be a useful link (I have not used this particular one, but it looks simple): http://www.mathworks.com/matlabcentral/fileexchange/29351-dm3-import-for-gatan-digital-micrograph. ImageJ will also open .DM3 files directly; I do all my analyses in that package, and there are plugins that you can try on spectrum images as well. Once you are in one of those programs you can easily write scripts to do any arbitrary operation on the data. Larry Scipioni [email protected] Tue Mar 10

I would suggest that you use a multivariate statistical analysis (MSA) approach, such as the AXSIA (Automated eXpert Spectrum Image Analysis) software or the MSA plug-in for DM that does principal component analysis. The AXSIA software uses Matlab, and the SIMSIMAN module that comes with it would allow you to extract your line profile easily; it would also be available in Matlab for any manipulation or export that you need. The MSA plug-in would allow you to reconstruct your data to improve your signal-to-noise for the profile. In both cases, you must be careful to align your EELS spectra in energy throughout the spectrum image using either the zero-loss peak or a peak that is in both phases and does not have a chemical shift. You must also take care to eliminate X-ray peaks in your EELS data; otherwise they are identified as unique phases. If you do this, you minimize the number of factors (components) identified. Masashi Watanabe and Paul Kotula gave excellent tutorial talks at M&M’13 on the topic. Both are available online for viewing; you might have to contact John Mansfield for the link because I don’t have it available as I write this. The advantage of the MSA approach is that your analysis gives you an image, so any inhomogeneities across your interface in terms of the concentration profile would easily be identified. It is also a totally unbiased analysis approach. Masashi is the author of the MSA plug-in for DM, and it is available through HREM Research. Paul is a co-author of the AXSIA software and a co-patent holder for it as well. I would highly recommend that you look up their publications on the topic, as they are very good reference articles to have. We just started using the AXSIA technique after having Paul Kotula give us a tutorial at the Army Research Laboratory and are finding it very powerful. It’s a bit trickier with EELS than with XEDS, though. Scott Walck [email protected] Tue Mar 10
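To make the reconstruction step concrete, here is a minimal Python sketch of the PCA idea (an illustration only, not the AXSIA or HREM Research plug-in workflow); it assumes the spectrum image has already been exported to a NumPy array, and the file name and number of retained components are placeholders.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical pre-exported spectrum image, shape (ny, nx, n_energy).
si = np.load("spectrum_image.npy")
ny, nx, ne = si.shape

# Unfold to (pixels x energy channels), fit PCA, and reconstruct from a few
# components to suppress noise; in practice the number of components would be
# chosen from a scree plot after spike removal and energy alignment.
unfolded = si.reshape(ny * nx, ne)
pca = PCA(n_components=5)
scores = pca.fit_transform(unfolded)
denoised = pca.inverse_transform(scores).reshape(ny, nx, ne)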

Digital Micrograph has a host of tools for dealing with 3D data sets. There is a menu labeled “Volume” that will allow you to rotate or project the data along any direction needed. For your application, you would want to project the data along “y.” You will then have a 2D data set with the projected intensity along the interface in the x-dimension and energy in the y-dimension. To save this as a series of files, you would then use the “File: Save As Series...” menu item. You can choose EMSA format for the file type, and the EELS header information and calibrations will be preserved. You can also use the “Text” format, but then you only get the intensities. You can write a simple script in Digital Micrograph to do this (one that took me about 4× longer to document than to actually write). For more information about scripting, there is a good reference section in the Digital Micrograph help file. You can also get a lot of information at the DM Script site at TU Graz: http://portal.tugraz.at/portal/page/portal/felmi/DM-Script. Also, a simple script to project a spectrum image into a line scan is available from Ray Twesten. Ray D. Twesten [email protected] Tue Mar 10
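As a rough Python illustration of the projection-and-export workflow described above (not a Digital Micrograph script), assuming the spectrum image is available as a NumPy array with axes (y, x, energy), and using placeholder file names, energy calibration, and integration window:

import numpy as np

# Hypothetical pre-exported spectrum image, shape (ny, nx, n_energy).
si = np.load("spectrum_image.npy")
e0, de = 400.0, 0.5  # assumed energy offset (eV) and dispersion (eV/channel)

# Project (sum) along x to turn the SI into a line scan versus y.
line_scan = si.sum(axis=1)  # shape (ny, n_energy)
energies = e0 + de * np.arange(line_scan.shape[1])

# Export one two-column text file (energy, counts) per y position,
# restricted to an example integration window (placeholder values).
window = (energies >= 520) & (energies <= 560)
for iy, spectrum in enumerate(line_scan):
    out = np.column_stack((energies[window], spectrum[window]))
    np.savetxt("slice_y%03d.txt" % iy, out, header="energy_eV counts")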

Image Processing:

Bruker Esprit 1.9 offline software

We just installed the Bruker Esprit 1.9 offline software, and some problems have come up. The first is that when I try to do a QMap for some existing ChemiSTEM HyperMap data, all the QMap images are just black, not in different colors as usual. The second is that when I wanted to make a new QMap method, I got an error like “cliff-lorimer: wrong primary energy in standard library.” So I guess there may be some parameters that need to be changed? I have already changed the “Voltage range” to 200 kV in the “Microscope information.” Qiang Wang [email protected] Sun Mar 29

Generally, when you receive the message that you have a “wrong primary energy in standard library,” it means that your spectra or map were recorded at a different energy than that used to build the Esprit standard library. Go to the “Database” menu, then at top right open the “Standard Library” button and select “new.” Accept changing the current library and fill in the data for the new one: any name you like; elevation, which is the take-off angle (probably 18° or 22° for a Titan or an Osiris, respectively); azimuth, the angle between the goniometer axis and the diode positions (45°); and the sample tilt that you used to record the data. Philippe Buffat [email protected] Mon Mar 30

Instrumentation:

high-resolution sputter coater

We have 3 FEI FEG SEMs in our building: a new FEI FIB, a 4-year-old FEI nanoSEM, and a 15-year-old FEI XL30 SFEG. We also have 2 older sputter coaters, a Polaron E5100 (1988) and a Denton Desk II; both use mechanical pumps. Our Polaron is no longer working. The problem, of course, is the visible grain at high magnifications (>50k×) with gold, gold/palladium, etc. It doesn’t matter whether it’s the Polaron or the Denton; the result is the same: visible grain above 50k×. I need to justify with a written rationale to our directors why we should consider buying a high-resolution coater. Simply stating “to complement our 3 FEG SEMs” is not enough of a reason to spend the money; they would prefer I fix the Polaron. Any ideas/feedback from those who have already crossed this bridge would be helpful. Also, does anyone have a resource for parts for Polaron coaters of circa 1988 vintage? Fred Hayes [email protected] Tue Mar 31

At the risk of sounding too salesman-y, I’d like to offer my thoughts on your situation. First, congrats on having 3 FEGs at your facility! That’s always a good problem to have! As for the coaters, I would start with the following approach. First, figure out the size of the features you are looking for. You should have an idea of the grain size put down by each of your coaters. If the coating thickness or grain size exceeds the size of the features you are looking for, you will never see them. I would try stating it in such a way that you may be missing important information or obtaining inaccurate information from your samples, because it is highly possible that small features have been completely obscured by the thickness and/or grain size of the metal coating you are currently using. I would recommend either an osmium coater or a high-vacuum iridium or platinum coater, as you will be able to lay down much thinner coatings with grain sizes that may not even be visible. If you are working at very low accelerating voltages, know that almost all samples, regardless of how carefully they were prepared or stored, build up a thin layer of hydrocarbon material on the surface. When working below 2 kV, this contamination contributes significantly to the image formed. Ideally it should be removed with a UV cleaning cycle (although a very low power plasma clean may work on some materials). Doing so may give a much better, and more accurate, surface image and may eliminate the need for a metal coating in some instances. In the end, I think a valid way to frame your request is by stating that you want to use the equipment and tools that will get you the most accurate data from your samples and not leave your results open to questioning or second-guessing. If it results in saving time (samples come out right the first time) or money (new systems come with warranties and are less prone to breakdowns), that might also help. Jeffrey Hall [email protected] Wed Apr 1

Instrumentation:

carbon coater problem

Our lab had an EMS 450 carbon coater sitting on the counter top. I located the roughing pump in storage and am trying to make the system operational. All electrical functions seem to be working. I dumped out the old pump oil and replaced it with new oil. Upon first pumping down the system, the vacuum gauge leveled off at 5×10⁻¹ mbar. I worked on the vacuum pump connections and was able to obtain 2×10⁻¹ mbar. I ordered new seals for the jar that forms the vacuum chamber (the old seals are at least 10 years old). The new seals just came in, and there was no improvement in the level of vacuum. One observation I have made is that I obtain the best vacuum when I turn on the system in the morning. If I leave the system running, the vacuum level will steadily worsen, finally holding at 5×10⁻¹ mbar. Once I have cycled the system, I can never reach that level again unless I wait until the next day. Any suggestions from the community? Do I have a pump problem or a vacuum gauge problem? Dan Fairweather [email protected] Wed Apr 1

Plenty of variables to consider, but perhaps the most straightforward is this: if the seals in your roughing pump are as old as the jar seals, and the pump was sitting around with used oil in it, you should probably consider a pump rebuild, since there is no telling how bad things might be inside. You might also check the literature on your pump, if any is available, to see what vacuum level it is capable of when working perfectly, so that you can determine a real target vacuum to aim for (not knowing the details of the pump, for all we know you are actually within spec, poor as it might be). John Papalia [email protected] Thu Apr 2

Dan’s right - There are a lot of variables to consider. I’ll add a few more. The fact that your vacuum worsens as time elapses makes me wonder if it is backstreaming oil into your carbon coater. A quick check of the vacuum line should let you know. If it is, that’s the first thing to take care of before you contaminate the whole system with oil. Assuming there is no backstreaming occurring, if you have access to a vacuum meter, you might want to attach it directly to the pump and see what kind of vacuum the pump is pulling on its own (no coater, no vacuum tubing). That should tell you which side the problem is on. If you don’t have a vacuum meter, try to find a second pump to try out on the system to confirm the vacuum you can pull. If it’s the pump, a rebuild or a new pump is probably the best option. My personal experience with rebuilds has been about 50/50, for what it’s worth. If the pump seems fine and the problem seems to be on the coater side, I would start by removing the bell jar and plugging the vacuum inlet in the chamber with a stopper to see again which side the problem is on - the chamber itself, or the internals. From there, it becomes a matter of trying to check seals to find the leak. Jeff Hall [email protected] Thu Apr 2

Instrumentation:

software and computer upgrades

I would like to revisit the problem of old software, computers, and institutional support. We have many instruments that run on proprietary software that has not been upgraded to more modern operating systems. For example, some of our instruments use programs only compatible with Windows XP. Our IT guys want to ‘upgrade’ all campus computers to a newer operating system and don’t want any old machines running. Is it reasonable to tell them that we need our old XP (and earlier) computers to keep our instruments running? What would be a good approach to satisfy their urge to stay current and our need to live in the past? Jonathan Krupp [email protected] Mon Apr 27

I think all of us feel your pain. Having done both microscopy and systems administration, I empathize with both sides of the equation. The IT staff often don’t “get” that you have a valuable piece of scientific equipment that will continue to work for many years (and which people need to use for their education or research interests), but will never again run an up-to-date operating system. Contrary to popular belief, you aren’t being a Luddite; you are stuck with something that will break if the OS is upgraded, and you can’t afford to break it or replace it with a newer piece of equipment. From the IT perspective, they are looking at a computer that will no longer receive security patches and whose antivirus support has already run out or is about to. Understandably, they want it off their network. With even flash drives being capable of transmitting viruses, you need a way for people to use the equipment without giving the computer a way to become infected. You might also need backup hardware that can still run Windows XP (some labs at the university here keep a small stockpile of XP-compatible computers as fallbacks). At minimum you should consider regularly creating disk images of the hard drive(s) to ensure that you can recreate the setup when some of the hardware fails (it’s not getting any younger). Old hardware drivers can be difficult to find, especially if they came on manufacturer’s CDs or floppies that you may or may not be able to locate in a crisis. What some labs have done is create a private network with a file server. The file server uses a current, secure OS and is where users of all the old computers store their files. The server can share the files out to the larger network via one connection while firewalling the private network (where all the WinXP, Win2K, etc. computers live) that it reaches via a different connection; in other words, the file server has two network cards. Users can no longer use floppies or USB devices on the old computers; they can only connect to the server. Depending on how your building wiring was done, the IT folks may be able to create the private network using the building’s network switch, without the need for physical rewiring. There may be other ways, but this seems like the most viable to me. It will take some money and expertise to pull off, but the alternative is worse. Douglas Cromey [email protected] Mon Apr 27

It’s perfectly reasonable to tell the IT people you need XP or whatever to keep your instruments running. We’re in the same boat with one of our confocals. A good approach to satisfying their need to upgrade and your need to not upgrade is to offer to let them pay for upgrading the hardware (computers) and software running your instruments, or to not pay for the upgrades and leave you with your current systems on the instruments. Phil Oshel [email protected] Mon Apr 27

The volume involved in consumer manufacturing makes computer equipment cheap and expendable. That does not make it a reasonable proposition to scrap specialized equipment that lacks those economies of scale; specialized instrumentation is neither cheap nor expendable. If the IT unit has difficulty understanding this, then, since you are at a college, you might consider getting an Econ faculty member involved who could assign a student project to evaluate the economics of the two alternatives: 1) isolate the security risks with a closed network running older operating systems; 2) scrap and replace with new instrumentation. Part of the cost of alternative #1 is that the interface boards from the scope manufacturers may go out of production, and repairs involving the interfaces may become more expensive for that reason. However, if the manufacturers know that a number of their clients are addressing the software obsolescence problem with well-maintained older operating systems, pressure to hustle older systems off to the junk pile may diminish. The rush to scrap and replace, solely for operating system compatibility issues, fails on all sorts of sustainability and economic grounds. John Twilley [email protected] Mon Apr 27

Yes, a common pain from time to time. I’ll just bullet-point a couple of thoughts/discussion points based on my lab’s experiences. 1. Dedicated control computers are part of an instrument. An upgrade of the computer part of the instrument implies that IT needs to make sure the device it attaches to still works OK, so let them deal with the vendor (if a Win7-, virus-checker-, and security-update-friendly solution exists, then great). The job of IT is, in part, to make sure you have a working system and that disruption is minimized. If the vendor tells them that their software won’t work, then it’s time to discuss options. The more you can massage the relationship with the IT people toward them realizing that they are upgrading a microscope rather than an office PC, and toward the realistic costs of software-induced hardware failure, etc., the better things will go long term. Then you can get them to suggest solutions. 2. We solved the problem of connection to the network and virus transfer via USB with a mixture of policy and technical solutions. For example, we have a TEM that runs Windows 2K on its control PC. The PC is connected via LAN to another PC running Windows 7 in a different room (this PC also connects the same way to some other microscopes). That PC also connects, via a separate LAN card, to the organization LAN and the internet, etc., and is fully patched and secured. To get files from the microscope, users access the shared data drive on the microscope PC from the Win 7 PC (which they can do while someone else is using the microscope). You then have a policy of no USB drives, etc. on the microscope PC, which is considered to be “not on the network.” Hope there is something relevant in that. Best wishes, Duane Harland [email protected] Mon Apr 27

A few simple steps may help. Step 1: install a Win 7/8 PC and disconnect the XP/Win2K machine from the network. Arguing with IT policy does not work, but taking the problem off their hands may. Step 2: let the old PC only run the equipment; do not use it for any other purposes. Step 3: the old PC should have nothing on it except the necessary application program and hardware drivers. Transfer or remove all acquired data from it at the end of each session; this way there is nothing to back up. Just keep a couple of spare installation disks with the application and drivers. Step 4: identify obsolete components, i.e., a motherboard with an older bus type, proprietary interface card(s), etc., and buy spares while they are available, or even an entire spare computer. They are cheap for now but will become expensive or no longer available, just like components for old PDP-based EDS computers in the ’90s. Data transfer can be done in many ways: a private network, USB drives, CDs, or anything else. If the XP machine becomes infected, then a system restore will likely fix it; if not, re-install the OS and the application, which is easy with nothing to back up and no data loss. Besides, “lethal” malware is a rarity; my PC-related tech support problems are almost always about gigabytes of temporary files, junk applications, and disk cleanup and defragmentation not done in years. Vitaly Feingold [email protected] Mon Apr 27

We are facing the same problem with old computers and unsupported operating systems, and we use the same solution: a separate local network (hosting 1× Win3.11WfW on an old EDAX DX4, 1× Win98, 2× WinXP, and 2× Win7). Here I would like to comment on the USB drive problem. After very bad experiences about 10 years ago, no user is allowed to copy data onto his/her own USB drive directly from a microscope PC. For USB drive users we use the following set-up: we have installed one Linux box in our local network running Debian Wheezy and a Samba server, and everyone who wants to copy data onto a USB flash disk has to do it through this Linux box and the Samba-shared folders on the old Windows systems. Oldrich Benada [email protected] Tue Apr 28

Core Facility:

access to user facilities

I am looking for advice from the community about how access to microscope facilities is granted to users. Specifically, I am thinking of something not quite so formal as the General User Access Proposal process available at national labs, but not as open as our existing “free-for-all” where anyone who requests training can have open access. I don’t need advice on how to train users; rather, I need a fair process for determining who should or should not become a user. Does anyone have a policy/process they wish to share (off line if you like) that weighs user requests? Perhaps a policy that includes various levels of access based on users’ needs and skills? And a follow-up question: is there a magic “silver bullet” for monitoring users after training to ensure that they are actually doing everything the way they were taught? Roger Ristau [email protected] Fri Apr 10

We are probably a small enough lab that we have remained rather informal about access. There are a few people that I have thought about redirecting away from the microscope, as their brains just don’t seem to be wired for that kind of work, but I don’t know that I have officially said anything out loud. Some seem to have handed off SEM work to others in their research groups who are more adept. Upon granting sign-up privileges, I advise new users whether they are free to sign up as they please or whether I would definitely like to be nearby for their next few sessions. Our reservation system, ORS, provides the option of being a resource monitor by e-mail. I set that option for myself, so I get notified about every change in the schedule. Most notices can be ignored; a few users are on my watch list, and the delete button is easy to use for those that don’t concern me. We average about 25 hours of use per week, so there is really no reason that users have to use the SEM outside of regular hours. A few users are granted or loaned keys for access at any time. Sometimes a last-minute evening or weekend session is needed for a paper deadline, but that is unusual. We do allow users to come before we leave for the day and continue their session into the evening. I do review photos occasionally from our users. Mostly I look at the images left on the SEM user interface and look deeper if I see signs of poor operation. I plainly reference the cost of detectors during training and assure users that if they follow the standard operating procedures, they should have no problems. I also explain to them that the procedures need to be followed completely and in order. I worked hard to make the short form of the SOP short and practical; it is about 1-1/3 pages, with another 2/3 of a page of common shortcut codes. And I tell them of some spectacular failures, like users who couldn’t get an image because they had failed to click the “Beam On” button on the UI. I make it a point to refer users back to the written procedures; otherwise I allow users to ask me too many questions and they never learn for themselves. I hope these ideas help. I look forward to hearing what others have to say. Warren Straszheim [email protected] Fri Apr 10

I would say that it is important to identify those candidates who indeed have a sufficient amount of work to do on the SEM. Those who just need to get 2-3 “nice pictures” are potentially problematic operators with little or no motivation for learning and understanding the technique. The next issue is to give users an approach, or at least some sense of the right way, to analyze their samples. The idea is to assist users from the very beginning in finding the right conditions for imaging/microanalysis/diffraction/etc. and to explain the reasons for the choice. The last important issue I would mention is interpersonal relations: it is better if the user (especially a new one with no experience) feels safe and comfortable reporting any problem to the instrument/facility supervisor. Users always make mistakes, but repairing a mistake, or tracing back to its source, is always easier if you get the whole story and it is done ASAP. This is possible only if the user is not afraid to report a mistake. It also ensures that the user will be instructed properly on how to avoid the same mistake in the future. Inna Popov [email protected] Sat Apr 11

I operate our SEM/TEM/confocal/epi/laser microdissection facility just like Warren does, so he saved me a lot of typing! It mostly works. Tina (Weatherby) Carvalho [email protected] Sun Apr 12

I’m in an unusual environment: a makerspace or hackerspace in Chicago, which to my knowledge has the most public access policy of any SEM. Any adult can join for $40/month, which gets them 24/7 access to the SEM, as well as access to the rest of the tools in the space. I can’t assume any science background with people who want to use it. I’ve put together a 3-hour PowerPoint course, which is a prerequisite. Then I schedule about 1 1/2 hours of hands-on training. Then they are free to use the scope without supervision. Generally, there is some level of self-selection because there is a time commitment to go through this. Still, I find that many people don’t come back and use it. This is true of many of the tools at the space. People see a scarcity of training classes on, say, the milling machine or our large CNC router, and they sign up just in case, even if they don’t have a project that could use it. We haven’t found a good way around this problem, and with nearly 400 members, and all tool authorizations done by volunteers, it is a source of stress for the organization. I have put in place a tiered access structure for the SEM, and right now have only implemented the bottom tier. This training doesn’t permit users to change samples, use the sputter coater, critical point dryer, backscatter detector, or EDX. Users need to work with me to prep samples, so I can make sure nobody is going to put something wet, or something that will outgas, in the chamber. Many of the users just want to see how a SEM works, so I keep interesting samples in the chamber at all times. So far, the only user breakage problem I’ve had was someone who couldn’t differentiate between the first and second peak and kept raising the filament current until it blew. Not that big a deal. That’s also why I’ve been nervous about letting anyone do sample prep, risk running the sample into the BSD, or have a liquid nitrogen accident with the EDX. Also, our sputter coater is very finicky, and the Ar pressure difference between the plasma extinguishing and the power supply overloading is quite difficult to maintain with the needle valve. Additionally, I may have been too hasty buying a CPD; we don’t have a fume hood, so I’m not comfortable fixing wet samples in glutaraldehyde, and we’ve done absolutely nothing with wet samples. (I also don’t have formal training myself, and I’ve been hoping to find a mentor in the Chicago area to help out.) Cheers, Ryan Pierce [email protected] Sun Apr 12

SEM:

observing ice

We are trying to observe ice in the SEM. The purpose is visualizing the transition of high-density ice (ice II or III) to low-density ice, either ice Ic or Ih, using EBSD. The cryo stage in the present setup does not get lower than −140°C, and this is only just below the recrystallization temperature. To prevent charging, gas is admitted into the microscope. This, however, has a tremendous effect on the stage temperature, which can easily go up to −110°C, way above the recrystallization temperature. As a result, we have so far of course not been able to identify the high-pressure ice polymorphs. The best remedy would very probably be to improve the cold stage so we can reach lower temperatures, and for the future this may well be what will be done. For the time being I am looking for alternatives to admitting gas to prevent charging. Two ideas popped up: freezing salt water or freezing a conductive nanoparticle solution, e.g., gold or silver, or, as was suggested by Guenter Resch, carbon rods. The rationale is that in the eutectic regions between the pure ice crystals a high concentration of ions or nanoparticles forms a network of conductive material that might or might not assist in reducing charge build-up. Does anyone have experience in this area or have alternative ideas? Jan Leunissen [email protected] Mon Mar 16

In my past life I ran an FE-SEM with a cryostage, and charging of ice was of little issue at 1-2 kV, most frequently 1 or 1.5 kV. What sort of instrument are you using? Can you get to a low enough kV to reach charge balance? As for adding nanoparticles, salt water, etc., I’d wonder about that. Yes, the crystallization process does exclude ions and such to produce the ice crystals (sea ice is really interesting because of this), but I doubt that process is 100% complete, and I suspect it would be less complete with nanoparticles than it is with ions. That means adding salts or nanoparticles will affect the properties you’re trying to study. Plus, the added salts/nanoparticles are going to add electrical effects to your samples, even if they are excluded from the crystals. What do these do? Phil Oshel [email protected] Tue Mar 17

I have no experience with SEM of ice, but from other posts to this list, I would try low-voltage SEM to balance the electrons staying in the specimen against the secondary electrons leaving it. An additional comment is that trying to freeze salt water is likely to result in crystals of ice surrounded by molecules of salt, since most salts do not dissolve in ice. One exception, which I have also considered in order to increase the conductivity of ice, is NH4F, since both NH3 and HF can incorporate into the ice structure; the reference for this is a book called Physics of Ice, whose author’s name I do not remember. Bill Tivol [email protected] Tue Mar 17