Specimen Preparation: methacrylate bubbles
I am trying to embed some small root tips in butyl/methyl methacrylate. I am using a ratio of 4:1, and benzoin methyl ether (BME) as the UV catalyst. I polymerize at room temperature overnight in flat-bottomed polyethylene and/or polypropylene capsules with indirect illumination from a UVGL-25 hand lamp at 366 nm. I keep getting air bubbles around the specimen in the finished block. I always vacuum-fix these guys until no air is left in them (they look beautiful in LR White). I have reduced the amount of BME in the final resin from 0.5% to 0.1% to try to slow down the polymerization. I have also tried up to 6 sheets of Parafilm, but I still get bubbles. If any of you have suggestions, I would love to hear them! Andy Bowling [email protected] Fri Dec 18
Yes, bubbles can be infuriating with this resin mixture. My understanding is that the bubbles are formed during polymerization from the resin, not the sample. The good news about that is it makes it possible to troubleshoot without samples. I have solved this problem for myself and for some others by playing around with the relation between the lamp and the samples. You didn't describe the relation between the lamp and samples, or what the role of the Parafilm is. I use aluminum foil to block any rays from going directly between the lamp and the sample. I have the bulb in the bottom of the box underneath a piece of Plexiglas. I have a strip of foil running down the Plexiglas on which I stand the sample capsules. I put a tent of foil over the row of samples. This doesn't eliminate bubbles but it forces them to the center and top of the capsule, well away from the sample. Also, I conduct the whole operation at 4°C. This is beneficial because polymerization is quite exothermic and the heat load is much less at 4°C than at room temperature. Unfortunately, I don't know the physics behind these bubbles so I cannot suggest anything rational. Try different light-sample arrangements. It does not require much UV light. Tobias Baskin [email protected] Fri Dec 18
Specimen Preparation: thin section then GUS staining
Has anyone done GUS (β-glucuronidase) staining following embedding and sectioning with paraffin? I have done whole tissue staining with good luck, but need to get into maturing seeds of grain crops to see expression. I am thinking that those are thick and hard to penetrate. Michelle Jamison [email protected] Thu Jan 7
The best images I have seen in the literature of GUS-stained sections are from material embedded in Technovit. This is, I think, glycol methacrylate, which is friendly to water-soluble things. It is also possible that PEG sections would work, but I have not seen images. Organic solvents tend to remove the GUS reaction product. It definitely won't work in butyl methyl methacrylate (we tried) and is unlikely to work with epoxies. In those cases, the staining is done first and then everything is embedded and sectioned. I expect that the GUS enzyme would lose activity after fixation and embedding, even in paraffin, but I don't know that for sure. And perhaps it could be tissue specific. For a tried and true method, I would look at Technovit. Ben Scheres' lab has used this successfully, as have many others. Tobias Baskin [email protected] Fri Jan 8
Specimen Preparation: hydrophobic grids
I have a collaborator here who wants to do negative staining of proteins on a lipid surface. She needs to have the lipid surface in contact with the carbon film on the grid, so the carbon film needs to be hydrophobic. I know how to make grids hydrophilic by glow discharge; how do you make them hydrophobic? Any suggestions? Margaret E. Bisher [email protected] Mon Dec 14
The nature of the charge on the carbon can be controlled by the composition of the residual gas/vapor in the plasma coating unit. Harrick has some information on their website in the product description: Surface Activation and Modification: http://www.harrickplasma.com/applications_activation.php; Surface Adhesion and Wettability: http://www.harrickplasma.com/applications_adhesion.php. Many units intended for plasma treatment, including the Harrick, have a fitting for admitting a metered amount of the chosen material. Dale Callaham [email protected] Tue Dec 15
There is an article by J. Dubochet in the book “Advances in Optical and Electron Microscopy, vol. 8” from 1982. You can find several tips there on how to make carbon films hydrophilic or hydrophobic. Full citation: Dubochet, J., Groom, M. & Mueller-Neuteboom, S. The mounting of macromolecules for electron microscopy with particular reference to surface phenomena and treatment of support films by glow discharge. In: Advances in Optical and Electron Microscopy (eds. Barer, R. & Cosslett, V.E.; Academic Press, London, New York, 1982). Oldrich Benada [email protected] Tue Dec 15
Image Processing: Digital Micrograph
We are using Digital Micrograph Version 1.81.78 on our TEM and often have an issue when converting the native .dm3 files into .tif format. The image often (not always) loses a great deal of contrast compared to the original capture, and we can't seem to get good control of this with the Brightness/Contrast/Gamma controls in the DM program itself before saving the image. We can adjust the image using Levels in Photoshop, but it seems like we're dumping part of the grayscale range during the 32-bit .dm3 to 8-bit .tif conversion. Also, some clients are unhappy about having to play around with large numbers of images. They want them user-ready right off the scope, reasonable or not. Does anyone else have this issue and a possible cure? Randy Tindall [email protected] Wed Dec 16
I noticed your query concerning TIF saving from DigitalMicrograph and did a quick test using GMS 1.81.78 to see if there were any obvious issues. I tried saving the display as grayscale, with the annotations burned in at the actual resolution of the DM image. I tried with the default brightness/contrast/gamma and with an adjusted one. The images I saved opened in Photoshop Elements and looked identical to the originals displayed in DM. You mentioned that the problem did not always occur. Could you contact me directly and provide some examples where the issue has arisen to help clarify what is going on? Robin Harmon [email protected] Wed Dec 16
We occasionally have what may be the same issue when converting DM3 to tif. We generally use batch convert, and sometimes one or more images in the batch appear solid black when opened (the image can be seen if brightness/contrast is pushed toward the limits). Converting the same batch a second time usually results in all images being OK. It's simple, but it seems to take care of what may be a random glitch in the code. Roger A. Ristau [email protected] Wed Dec 16
You can use ImageJ which reads the full 16 bit raw DM3 file format. I do this routinely and don't lose any information. You then have all the ImageJ tools to manipulate the data. ImageJ can be downloaded for free here: http://rsbweb.nih.gov/ij/. Nestor J. Zaluzec [email protected] Wed Dec 16
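For readers working outside ImageJ, here is a minimal Python sketch along the same lines; it is not from the thread, and it assumes the third-party hyperspy and tifffile packages are installed (file names are hypothetical). It reads the raw .dm3 data and writes it back out at the file's native bit depth, with no display scaling applied, so no grayscale information is lost before later processing:

```python
# Sketch only: hyperspy and tifffile are assumptions, not tools named in the
# thread; the file names are hypothetical.
import hyperspy.api as hs
import tifffile

signal = hs.load("capture.dm3")   # loads the stored pixel values, not the DM screen display
data = signal.data                # NumPy array at the file's native bit depth
print(data.dtype, data.min(), data.max())

# Write the unmodified values; nothing is clipped or rescaled here, so a
# later contrast stretch still has the full grayscale range to work with.
tifffile.imwrite("capture_fullrange.tif", data)
```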
I have placed a screenshot of the problem I described at http://www.emc.missouri.edu/lookatthis.htm, just in case people are wondering what we're talking about. The image on the left is the original .dm3 test file as viewed in ImageJ. The image on the right is the corresponding .tif file as converted by Digital Micrograph using their Batch Convert function, also viewed in ImageJ. Similar results show up in Photoshop. The .dm3 and .tif files look pretty much identical IF you only view them in Digital Micrograph, apparently. In other words, if you batch convert the .dm3s within DM, then open up the resulting .tifs or .jpgs, they will both look fine. Switching between imaging platforms seems to cause the problem, but hardly anybody here uses DM to do image processing on their home or office computers after they leave our lab, so they usually see the pasty files. I agree with Nestor that ImageJ works well with these files, if you open the .dm3 in ImageJ and save it as a .tif from ImageJ, which preserves the 32-bit indexed color format. In addition, as John Russ points out, the newer Photoshop versions will handle high-bit-depth images; however, I'm pretty sure there is no Photoshop plugin as yet for opening .dm3 files directly. Please correct me if I'm wrong. Randy Tindall [email protected] Wed Dec 16
I don't use ImageJ very much and I definitely don't use Digital Micrograph, but I am confused. The TIF file and its histogram within ImageJ look consistent. The histogram is definitely weighted to the right side, i.e., the light grays, and the displayed image is consistent with that: it has no dark grays. The DM3 image and its histogram are not consistent. The image seems to span the whole grayscale range, with some areas nearly black and some nearly washed out, yet the histogram shows only about one fourth of the gray levels to be occupied. The lightest pixel should be a middle gray and the darkest pixel should be a dark gray. That doesn't add up. Perhaps there is something in ImageJ that autoscales images to spread the histogram to fill the available grayscale display. That seems to have been done for the DM3 image, but not for the TIF image. Maybe it is only done with deeper images (i.e., more than 8 bits per pixel). It also looks like the converted image was prepared by scaling the grays from zero to the maximum brightness rather than starting from the minimum brightness, which would shift the histogram over to the right. If the original image had had a darker pixel to begin with, the effect would have been less obvious. That seems to be the source of the problem. I wonder if this traces back to more automatic functions being used to collect the images. I am not familiar with TEMs, but I know that I would forgo the automatic brightness function with SEMs and set my brightness range manually to fill the available gray scale (input range to the A/D). I will also ask why so many gray levels are used on this sample. It looks like a given phase would be spread across many gray levels even at 256 levels, so the extra 24 bits of data per pixel seem to be wasted in this case. I suppose other images benefit from the greater data depth. I could also wonder aloud why indexed color was used instead of a grayscale interpretation. I suppose there is a reason, which may have much to do with manipulating brightness, contrast, and gamma without changing the actual data. I am just curious. Warren Straszheim [email protected] Wed Dec 16
Perhaps I can clarify. Both ImageJ and DigitalMicrograph perform an intensity transform to map from the raw data to the display on the screen. This consists of a survey to locate data values to be assigned to black and white and, potentially, a brightness/contrast/gamma adjustment. This is why ImageJ displays the DM3 image with shades spanning the range from black to white even though its histogram indicates that the data lie in about one quarter of the histogram range. This is a common process in image display applications. The converted image looks pale because it was generated from a display with a gamma of 0.6, and this skewed the pixels in the display towards white. The histogram of this display therefore shows pixels skewed to the right! Without the gamma correction, the first peak of the histogram lies at a pixel value of around 90 rather than the 150 you can see in this case. Regarding the large number of grey levels, I think the CCD detector in the camera used had a dynamic range of 16 bits. After dark subtracting and gain normalizing the data, there are negative pixel values, so the data do not lie within the range of a 16-bit unsigned or signed integer. We therefore chose to put it in a 32-bit signed integer. You are right that many of the 32 bits are not used, but we prefer to maintain the integrity of the data rather than clip or offset it to fit in a 16-bit data type. Robin Harmon (Software Program Manager, Gatan Inc.) [email protected] Thu Dec 17
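To illustrate the kind of transform Robin describes, here is a small NumPy sketch; the function name, survey limits, and pixel values are invented for the example and are not taken from DigitalMicrograph. It shows how a survey maps part of the data range to black-to-white, and how a display gamma below 1 pushes mid tones toward white, giving a pale-looking export:

```python
import numpy as np

def display_to_8bit(data, black, white, gamma=1.0):
    """Rough sketch of a display transform: clip to the surveyed black/white
    limits, normalize, apply gamma, then quantize to 8 bits."""
    norm = np.clip((data.astype(np.float64) - black) / (white - black), 0.0, 1.0)
    return np.round(255 * norm ** gamma).astype(np.uint8)

# Toy values occupying only part of the data range, as in the .dm3 histogram.
raw = np.array([2000, 3500, 5000, 8000], dtype=np.int32)

print(display_to_8bit(raw, black=2000, white=8000))             # gamma 1.0: [  0  64 128 255]
print(display_to_8bit(raw, black=2000, white=8000, gamma=0.6))  # gamma 0.6: mid tones pushed toward white
```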
Interesting thread. Indexed color catches my attention. Many of my imaging programs don't like indexed color and render such images black. The SEM and EDS images are indexed color. I convert their TIFFs to greyscale using Mode in Photoshop and move on. Perhaps ImageJ's Type/8-bit command would do the same? Gary Gaugler [email protected] Wed Dec 16
When working with images in DigitalMicrograph it is important to make a distinction between the raw data (the greyscale values of each pixel) and what you see displayed on the screen, the image display. The two are not generally the same. If you right-click on an image in DigitalMicrograph and select Image Display, you can set how the image is displayed via the Survey method. You can also do it manually, via the histogram. DigitalMicrograph generally does a good job of displaying an image, even where the greyscales in the histogram are bunched up at one end, since it will automatically stretch the histogram in the display (depending on which survey method is selected). When you export this image to another software package, that package will most likely display the image so that the full range of allowable greyscales (which depends on the bit depth at which the image was saved) is displayed. This may generate the washed-out images you reported. If your images have been exported at 16-bit depth, then simply restretching the histogram should produce a high-contrast image. If you exported at 8-bit depth, then stretching the histogram may give a poor result, as there are not enough greyscales left to reproduce tonal variations. There are a number of options: (1) Use Gatan's Batch Convert; this will save the image display (i.e., how it is set up to appear in DigitalMicrograph) to an 8-bit indexed colour image. (2) Use my Multiple Saves as Hi-Res TIFF script. This seems to do pretty much the same as the above, though I haven't compared the two methods. Get it from: http://www.dmscripting.com/multiple_saves_as_hi-res_tiff.html. I use this for all my conversions and haven't had any problems. If you manually adjust images immediately after acquisition, make sure you save these changes before attempting either of the above conversions. (Note that the option File/Save Display As does the same: it saves the front-most image as it is currently displayed, in an 8-bit format.) (3) Choose File/Save As. If you select the TIFF option you can save your image in 16-bit format. This preserves greyscale information well, but you will be limited in the number of applications that will read the file, and you will have to play around with the levels to duplicate what was shown in the image display. (4) Use a plugin for ImageJ and read your DM files directly; I haven't really played with this, as I do everything in DigitalMicrograph. Hope this helps. Dave Mitchell [email protected] Sun Dec 20
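As a companion to option (3) above, here is a minimal Python sketch of the histogram restretching Dave mentions for a 16-bit TIFF export. NumPy and the tifffile package are assumptions (any image library with 16-bit TIFF support would do), and the file names and percentile limits are made up for illustration:

```python
import numpy as np
import tifffile

# Read a 16-bit TIFF saved via File/Save As; the values keep the original
# grey-level information but may be bunched into part of the 0-65535 range.
img = tifffile.imread("dm_export_16bit.tif")

# Stretch between low/high percentiles rather than the absolute min/max,
# so a few outlier pixels do not flatten the contrast.
lo, hi = np.percentile(img, (0.1, 99.9))
stretched = np.clip((img.astype(np.float64) - lo) / (hi - lo), 0.0, 1.0)

# Quantize to 8 bits for viewers that expect the full display range to be used.
tifffile.imwrite("dm_export_8bit.tif", np.round(255 * stretched).astype(np.uint8))
```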
Imaging censorship
On occasion we look at things in our scopes that have no good basis for reference—no previous publications, no other EM images to compare with—you get the picture. My method has been to take representative images of what is there, even if the images have a wide variety of things in them that don't resemble each other or what the sample supposedly “should” look like. It's that “should” that is the problem. It sometimes happens when we send these images to the clients that they grab onto whatever looks like what they want to see, pretty much ignore anything else, and then start making assertions about the images that go way beyond what the image can support, and want to plug all that into a publication. (I've had people get all Eureka! about the “champagning” artifact on a negatively-stained prep, for example.) So the question is, if the EM operator has a reasonable suspicion, but not a certainty, that an image is showing an artifact or something that is not really the part of the sample the researcher wants to see, how should that be handled? Should we send the images along with our caveats and risk having them published with interpretations that go beyond the data and may just be dead wrong? Or should we self-censor and not send these images? Remember, I'm not talking about things that we know are artifact or garbage. That's a clear call. I'm talking about imaging things that may not have been seen before, and nobody really knows what they look like (but they think they do), thereby making it difficult to separate artifact from real data. What we do now is send the images with our comments and hope that the client isn't so desperate for a publication that they ignore our cautions. We are virtually never listed as co-authors, so that's not really an issue, but still…. I like clean science. How do other members of the Collective handle these cases? Randy Tindall [email protected] Fri Jan 8
For my part, I wear two hats: I am a faculty researcher and a person who directs a service facility. As a faculty member I am a bit pushy about things I know about. I know exactly what you are saying about people latching on to something they like and ignoring the rest. I push rather hard for a more neutral reading of the data. When we have done enough work to know what is going on, then we publish. I have been in situations where others disagree with me so much that I just talk myself off of the project. Not common, but it has happened. I consider these occasions personal failures. As the director of a facility, I am regularly helping people with things about which I know nothing. I have a very different approach here. I will only comment on what I know: the imaging system and known artifacts thereof. When I know the cell biology, I help with that also. Once I have explained what I see in the sample, I let the others do what they will. I am uncomfortable doing more than stating my reservations. I have been known to repeat experiments to do the controls that I thought a researcher should have done. If I can replicate the result of interest in a negative control, then I get more pushy about my thoughts. David Elliott [email protected] Fri Jan 8
I am in the same situation as Dr. Elliott. I work as a faculty member in the microbiology and cell science department as well as director of an EM lab at the core facility. Basically, I do not stick only to electron microscopy for deriving conclusions. I reevaluate what we have found from electron microscopy by means of light microscopy, cell fractionation, or mutant characterization. For service projects, I give my clients my comments and ask what other evidence they have to support their claim. For service projects in which we do not participate as coauthors, it is the clients' responsibility if they go against opinions from an expert like you. I keep records of the discussion (usually by e-mail) in case they blame me. Hope this helps. Byung-Ho Kang [email protected] Fri Jan 8
I think we always need to discuss potential artifacts with our users and suggest controls where these are feasible. This is particularly true for assisted projects, where users are coming to us both for our equipment and our expertise. Having done that, I am not interested in getting into a protracted battle if the user still wants to proceed with publication (and assuming I am not a coauthor). Where we sometimes run into problems is when the contact is a student who may not have the expertise or inclination to convey these reservations to the advisor (who will be a coauthor and is usually footing the bill). In such cases I think it's prudent to include any comments and caveats in an e-mail copied to the advisor. Marie E. Cantino [email protected] Fri Jan 8
First, the clients are going to over-interpret anyway, no matter how well known or unknown the samples are, no matter what you say. Sometimes one “gets it”, but … And “clean science”? What's that? Especially in biology. It's all a mess. Remember the main corollary to Heisenberg's Uncertainty Principle: in any experiment, regardless of the results, you can never know what really happened. This goes for imaging things in the microscope, too. E.g., the size of mammalian red blood cells depends on how they were prepared and what microscopy was used to image them (the literature search is left as an exercise for the reader). So how big is an RBC? It depends. Before descending completely into gloom and despair, though, remember we're *always* making judgments about how things should (or do) look. So, if the specimens are new, we look more and use different methods to look at them. If they are still biconcave discs in light microscopy, blood smears, DIC/phase, AFM, SEM, etc., RBCs probably really are biconcave discs. The problem isn't so much “how to interpret this new thing?” as “how to interpret this new thing that I've only looked at one way?” So, I'd send the images with the best interpretation and all the caveats, foremost of which is “this needs more study and I suggest these different ways of preparing and imaging,” with the default opinion that the whatzit is an artifact. “Null hypothesis” if that reads better. But I definitely would not self-censor. First, it's not really your (our) data, it's the client's, and second, they may have literature that refers to the whatzit or know someone you don't who has seen the thing. If the client is so desperate for publications that they ignore cautions and caveats, then they're going to publish garbage even if you only give them good, clean images. Just make sure you're not a co-author, and maybe request that you're not in the acknowledgements. Philip Oshel [email protected] Fri Jan 8
One thing I've started to do is make notes directly on certain images—the ones that I know will get someone all excited over nothing (e.g., champagning in negative stains, a recent cell culture I was given that had massive Mycoplasma infection, etc.). I provide the original digital images, then a copy of the suspect image with a text layer pointing to the “problem,” stating what it is & why it isn't Nobel prize-worthy. Somehow seeing that info right on the image gets the message across to some people when a plain old text document accompanying the data disc doesn't make a dent. Tamara Howard [email protected] Fri Jan 8
So what is our responsibility as scientists when someone, in our opinion, crosses the line from an oddball interpretation of the data to an erroneous and/or fraudulent interpretation? A previous facility manager here confided in me that they no longer did service work for a particular faculty member because of the way the faculty member had, in the manager's opinion, twisted the data. Unfortunately the manager, who was not a faculty member, did not feel as if there was an option for calling this behavior into question. Whistle blowing can, and historically has, left the whistle blower scarred or unemployed. For what it's worth, our campus now has an anonymous phone number for reporting financial and/or research fraud, but I have my doubts about how well known it is on campus. I've written guidelines about digital image manipulation ethics, but as others have pointed out, it's easy to be outside of your area of technical expertise when doing service work. Supposedly, peer review of publications should weed out wacky interpretations, but we all know of odd research findings that have been superseded by better research, or have seen journals have to retract a paper because questions were raised about the data (and further review by an embarrassed senior author who was not able to locate the original data). A recent paper I read studied citations of the articles that were involved in Office of Research Integrity findings. These were cases where falsification/fabrication/plagiarism in the articles was established. Of the articles written by others citing these “bad” papers, only 5% of the citations referenced the fraudulent articles in a negative light; the rest were considered positive. The blame goes a lot of places, but don't some of us have a responsibility to try to reduce the amount of “chaff” in the research literature? I'm not expecting a definitive answer, just tossing this out there as a rhetorical question. I once spent a good deal of time explaining to a student that the colocalization they were seeing in confocal images (red image staining + green image staining = yellow pixels in the overlay image, for you TEM folks) was an artifact. The student, who was not a microscopist, could never really explain the technical aspects of the problem to the PI, and ultimately I had to write a short essay (with illustrations) to explain the physics to the PI. The lab really wanted the two items they were immuno-staining to colocalize, but the confocal (configured correctly) didn't show that. Douglas W. Cromey [email protected] Fri Jan 8
I am opposed to self-censorship, which in my opinion amounts to saying, “I know more about the project than the investigator, therefore …” By all means do include caveats about sampling error, artifacts, etc., but if someone chooses to “over-interpret,” that is their problem. Geoff McAuliffe [email protected] Fri Jan 8
I have had similar concerns throughout my research career. They go back to the early days of Photoshop and the investigator who wanted to alter intensities of bands in gels because he knew those extra bands were just “mistakes.” We provide written reports with all service projects that contain all sample prep info, summary of results, our observations, and any explanations, suggestions, or concerns we have. These are given to the students but also sent to the PI. Our responsibility ends there unless we are asked for further input. The reports are for our internal records as well as to help the researchers. I also refuse co-authorships unless I have the opportunity to edit the manuscript and agree with the conclusions. Debby Sherman [email protected] Fri Jan 8