Authorship
Confocal Listserver
I am working on a paper with some collaborators. They are doing cell tracking analysis on image data that were already published. What is the accepted standard in the field for authorship? The theory paper relies on these experimental data to validate their model and would not be possible without them. I think the person who prepared the samples and collected the data should be an author. What do others think? We will be publishing more and more of our data, and I want to be sure to get this right. With community-driven open science, it is so important to make sure proper credit is given. I'm open to different opinions, though, as I can see this issue from many sides. Is anything published on this, or are there international recommendations? I looked at the RMS guidelines, but my situation doesn't really fit their examples. Sincerely, Claire Brown, Dr. [email protected]
I think it depends on how they obtained the image data. If the data are already published and were taken from a repository, I do not think that the original author should be on the new paper. The paper where the data have been published should be cited though. It is a bit like producing sequencing data that are then reused. If this is not the case, then how did they obtain the data? Sylvie Le Guyader [email protected]
I think that if the data are online and have a DOI, then it is required to reference that. Doug Cromey [email protected]
I am in full agreement with you. However, this is not how the world works, especially when the data are generated by core personnel and paid for as a service. This is regardless of any institutional policies on authorship. Some labs are highly resistant to including core personnel as authors, whereas others embrace giving credit where credit is due. Institutional policy here is that all work that comes through the core must be acknowledged, some of it with specific grant numbers. There is further official guidance on who should and should not be authors, but it appears this guidance is not particularly heeded. If it were, facility staff would often be middle authors, as we help plan experiments, execute experiments, and analyze data, often with creative solutions or custom code. When we point out that the novel protein the lab is interested in is more highly expressed at a specific point of the cell cycle, which they hadn't noticed before, and that observation becomes Figure 2C, we should be authors. Some labs will offer it, and some labs won't give it even if you point it out and document the help. My personal view is that staff should be middle authors on many, if not most, of the papers. It's a battle to get some labs to acknowledge core facility staff at all. Some labs are extremely grateful and offer authorship for standard service. Most fall in the middle. Michael Cammer [email protected]
Sorry, I misread the question. If a published paper is being referenced and the staff member is already an author on it, then the staff member is not an author on the new paper. “Imaging and analysis as in ref 14” is sufficient. But if the staff did new work for the new paper, then yes to authorship again. There is one clear exception: a methods paper. If the follow-up paper is a methods paper that uses the methods done in the core, then this should be an additional authorship for the core staff, just as the other authors are taking advantage of the chance to publish again. But for research findings, this argument could go, as you say, either way. To reiterate, “Imaging and analysis as in ref 14” may be sufficient. But if the same data analysis is published again as a specific figure, then the authorship is probably deserved again. The way you presented the situation raises another question: are the same data being published again? Michael Cammer [email protected]
Personally, I would feel uneasy being a co-author of a paper that I cannot defend. Shouldn't co-authors of a paper be able to explain to others the paper's findings and conclusion? If someone has done some parts of the paper, but may not know or understand other parts, or worse, they cannot agree with the findings/conclusion, then should one be an author of the paper? Kenneth Ho [email protected]
Isn't the basis of collaborative work that each author brings their own expertise without specifically demanding that all authors become expert in all the aspects of the paper? How about collaborative work between medical staff and research staff? How can I understand the medical part of a paper when I do not even have legal access to the data? Having been there done that, I think that only the last author is responsible for it all and the contribution/responsibility area of each author should be clearly stated/delimited. Sylvie Le Guyader [email protected]
There are some interesting answers here. I have experienced many of them from both sides. An idealized simplified scheme might go like:
1. If the data are published with DOI, etc., and you use them ‘as they come’, then just citing the data should be enough. It's the same as integrating the results of a paper in your own work.
2. If the data producers had to work with you to provide the data in a specific and useful way, then they made an intellectual contribution and should be included as authors, or at least acknowledged.
1. and 2. should be independent of the job role or rank of the data producer.
For ‘data’, also read ‘code’. The conflict for a data producer is that authorships count for far more than citations in the (broken, in my view) academic credit system, so they are incentivized not to publish their data freely but rather to gatekeep access to the data, with a deal: if you want to use the data, you have to include me as an author. It's a kind of rent-seeking version of 1. above. For what it's worth, “Data available upon reasonable request” turns out to be untrue in >90% of cases. https://doi.org/10.1016/j.jclinepi.2022.05.019 Michael Doube [email protected]
I am with Sylvie on this one. In a collaborative paper you cannot expect every author to understand and be able to defend every aspect of the paper. I see it as the very point of a collaboration that you don't have to know every little detail but can work with experts on other subjects. The author contribution section should state who did what, and with that comes the respective responsibility. As for the original question by Claire, it is not clear to me how the data were obtained. If they were just downloaded from a public repository, I do not see co-authorship as a requirement. If the data were obtained privately, and/or additional (unpublished) information and support helpful for the new study were provided, co-authorship should be granted. Between those two poles, there is a gray zone where it could go either way (just my opinion, not some consensus). I would expect that there will always be a gray zone, no matter how detailed any rules are phrased. You also might want to check the journal guidelines. From a practical point of view, it may be easier to get data in the future from other groups if you establish a track record of offering co-authorship. But that is a whole different issue. Steffen Dietzel [email protected]
Thank you all for the input. I think it is good that I was a bit vague in my description. Now I can give more detail and see what people think. The data are mine. One of my PhD students spent several years collecting them. We are one of those groups that said “data available upon request.” I agree we should share our data, but properly annotating, curating, and providing metadata is a huge investment of time and resources, and, to be honest, in our broken academic system there is currently little credit for doing this. So, this brings up my question. I have provided the data to my collaborator and will be a co-author on the paper. They are doing additional analysis of the cell tracking that we did not do in the earlier work and using the data to support their model. However, I am trying to decide if my former PhD student should also be listed as an author. The fact that we get little credit for data reuse makes me say yes, but I wanted to get an idea of what the “norm” in the field is. The original paper will be cited, so perhaps that is all that is required, but when I think of how much work went into collecting those data, it just doesn't seem like enough. Sincerely, Claire Brown [email protected]
SEM Vibrations
Microscopy Listserver
Dear colleagues, we recently installed a new JEOL FEG-SEM. Measurements before installation showed no disturbances. Now we sometimes (not all of the time) find strong vibrations (visible as low as 10 kx!), and I just measured the frequency as 79 Hz. We are in Austria, and our technician told me that the mains frequency is 50 Hz, so we can exclude electromagnetic interference from an electrical device. Does anyone know what may produce a vibration of 80 Hz in a lab? Would it be completely unthinkable to expect them to come from freezers, for example? Best regards, Stephane Nizet [email protected]
Were the vibrations from electrical or mechanical sources? The two merit different assessments. As I recall, if the amplitude is greater at longer working distances, that indicates an electromagnetic source. If the magnitude does not depend on working distance, then it is likely a mechanical source. If the problem recurs frequently enough, it may not be too hard to track down. If it is infrequent, it may be harder to determine. Warren Straszheim [email protected]
Warren's mention of the vibration amplitude being a function of working distance is a good basic test. To further troubleshoot, you may also consider whether the magnitude of deflection is a function of accelerating voltage (a 1 kV beam will be deflected far more than a 30 kV beam if EMI is to blame). The fact that vibrations are visible at 10 kx seems rather unusual. If you were to make a quick attempt to quantify the magnitude, does it scale with magnification? If your SEM is water cooled, it may be possible that you have some air trapped in the coolant loop that is intermittently causing this behavior. As for freezers, the intermittent cycling of the compressor could create an occasional disturbance, but the frequency seems suspect. Matt Schneider [email protected]
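As a rough illustration of why the kV test works (a back-of-the-envelope estimate, not from the posts above): for a stray transverse magnetic field $B$ acting over a length $L$ of the column, the non-relativistic beam deflection angle is
$\theta \approx \frac{eBL}{m_e v} = BL\sqrt{\frac{e}{2 m_e V}} \propto \frac{1}{\sqrt{V}}$,
so dropping from 30 kV to 1 kV increases the deflection by roughly $\sqrt{30} \approx 5.5\times$, whereas a purely mechanical vibration of the stage or column would be unchanged.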
Not sure how you came up with 79 Hz, but the electron column's resonant frequency due to vibration is typically in the single digits. If the “vibration” is sinusoidal, it is most likely an electrical or field issue (external or internal). Check the microscope power at one of the service outlets provided on the back of the console (should be 200 V). JEOL microscopes have taps on internal transformers provided for adapting to varying line voltages. If the taps were not set correctly at installation, the power supplies may not be regulating properly. Bill Mushock [email protected]
I once had an electrical noise issue in a temperature reader while measuring temperature at 1 data point per second on four channels. In those conditions, the temperature was stable and equal in the four thermocouples, but when I increased the reading rate to 10 data points per second, a sinusoidal signal spanning 4 degrees appeared. The four channels showed the same sinusoidal signal, but delayed relative to one another. To avoid this, I placed the thermocouple tip in a metallic wire paper bin, which acted as a Faraday cage. I never completely determined the origin, but we thought it could be related to WiFi or even to fluorescent tubes. To get proper performance, I used a special module for the temperature reader that acquired 100 data points per second; when using this module at a rate of 10 data points per second, everything was okay. Therefore, I believe the interference had something to do with the system's internal circuits. Antonio D. Molina-Garcia [email protected]
We had a problem with an old -80°C freezer several rooms away. Every time the compressor came on, it gave us similar interference. It was an old building, so there were also grounding issues that were suspect. John Shields [email protected]
To add to the fun: What's going on outside the lab? I've had mechanical vibrations in SEMs from street construction >100 meters from the room. Any construction going on outside that wasn't happening when you did the first vibration tests? Or other such changes? And check for freezers, etc., on floors other than the one the EM is on. Phil Oshel [email protected]
You may need a quiet day/evening in the lab area to test and isolate everything. The lab for our JEOL FEGSEM was in a new building, and the site survey had passed with no problems. But when the install took place a couple of months later a significant EM field had appeared. This was finally traced to LED light fittings in the ceiling of a lab, even though all the units were at least 20 m away. They had installed power lines to new switches that ran down a wall nearby. With switches and cables repositioned the lab was all fine again. Peter Davies [email protected]
Please be more specific. In which mode is the system operated (SE, BSE, with or without image rotation, what line times, etc.)? Are you seeing mechanical vibrations or electrical interference? You can use special SEM calibration standards to give an indication of the magnitude. You can also use a Hall probe to check for magnetic fields (though that will not detect mechanical vibration). If you are more specific, it will help in finding the root cause of your problems. Gert ten Brink [email protected]
I (with various service engineers) have chased down a number of vibration sources over the decades and have experience with both JEOL and Hitachi SEMs and TEMs. Sometimes the cause is obvious and other times not. Finding the cause is always frustrating. I will list some ideas based on my experience as a user. Is it intermittent? If so, can you trace it back to time of day, passing traffic, an elevator, or other mechanical vibration? Even though my current columns are more than 600 ft from a rail line, I must pay attention while taking photographs. Luckily, that line only has heavy rail early in the morning and very late at night. The daytime light rail does not seem to affect images, even though the amplitudes are measurable. Buildings often have a fundamental vibration frequency; mine is about 10 Hz in this facility. Even though you probably had a mechanical vibration measurement when JEOL did the site survey, conditions can change. In my last building, the power company installed a 20 kV distribution line 50 ft from my column after the survey and before installation. At the same location, I could count the axles on the trucks going by in front of the laboratory by watching the floor vibrations. I had to install an air suspension table to limit mechanical vibrations. It limited but did not prevent the vibrations. It might be useful to have JEOL re-do the vibration study.
Are there air currents in the room? I had to install special diffusers in one of my microscope rooms. Even though the signal is unlikely to be from the electrical system directly, run without the room lights on; a noisy ballast can be seen in the image. We encountered electrical line noise from both the service to the building and from other equipment in the laboratory that had noisy switching circuits. Both of those were visible in images at 10 kx. The solution was a power conditioner. In this case, look at the incoming power with an oscilloscope and look for spikes or unusual components in the waveform. I have also had such signals originate inside the SEM itself. This is unusual, but I have had both of the following happen. Dirty apertures can sometimes cause a noise signal that shows up in the image; because it is from discharging, it is usually not a regular signal. It can't hurt to clean them. I have also had a regular signal show up at about 10 kx that eventually traced back to a bad component in the imaging circuit. In troubleshooting internal sources, does the frequency change with kV, magnification, or emission current/voltage? Does the frequency or amplitude change with working distance? If so, that could indicate an environmental source. Vibrations can originate with the cooling system (if water cooled). Did JEOL install a voltage/frequency conversion for this tool? Even though the service to your building is 50 Hz, is that what is being provided to your tool? Matt Schneider mentioned some other common reasons for odd-frequency vibrations. Air bubbles can cause strange stuff and may be introduced if the coolant level gets low in the chiller. Or the coolant pump could be flaky. Anyway, some stuff to think about. Dan Crane [email protected]
If the source of the noise is mechanical vibration, it has to be pretty strong to start showing at 10Kx mag. If so, you can try tracing the source: Get a cardiologist's stethoscope (the one with a membrane), gently press it to the chamber, and listen. You will hear 80 Hz. Move the stethoscope around to the frame, pipes, hoses, compressors, etc. As you get closer to the source the sound will increase. Discounting deflated/defective air mounts and/or forgotten shipping screws or transportation locks, 80 Hz is close to the range of some cooling fans and roughing pumps. Valery Ray [email protected]
A lot of information has been provided. Here are my 2 cents. A good compilation of different sources of noise can be found at https://www.vibeng.com/blog and https://www.vibeng.com/blog/understanding-and-mitigating-the-vibration-in-your-facility. To distinguish between mechanical vibration (for example, pumps) and electromagnetic noise, a service engineer simply places a glass of water on the vacuum chamber. If mechanical, waves in the water will be seen. Your smartphone also has sensors! Try the phyphox app. Stefan Baunack [email protected]
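If you do log the vibration with a phone, a few lines of analysis can pull out the dominant frequency. Below is a minimal sketch, assuming a two-column CSV export (time in seconds, acceleration in m/s²); the file name and column layout are placeholders to adapt to the actual phyphox export. Note that resolving an ~80 Hz signal requires a sampling rate above 160 Hz, which not every phone sensor provides, so check the achieved rate in the export.

```python
# Hypothetical sketch: estimate the dominant vibration frequency from an
# accelerometer trace exported by a phone app such as phyphox.
# Assumes a two-column CSV (time in s, acceleration in m/s^2) with a header row.
import numpy as np

data = np.genfromtxt("accelerometer.csv", delimiter=",", skip_header=1)
t, a = data[:, 0], data[:, 1]

a = a - a.mean()                     # remove the DC (gravity) offset
dt = np.median(np.diff(t))           # sample interval; logs are assumed ~evenly spaced
freqs = np.fft.rfftfreq(len(a), dt)  # frequency axis in Hz
spectrum = np.abs(np.fft.rfft(a))    # amplitude spectrum

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the 0 Hz bin
print(f"Dominant vibration frequency: {peak:.1f} Hz")
```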
I received a lot of replies and started to answer individually, but it will take me all day so here is a general message: Thank you to all for your very helpful tips. Just some basic information about the building since it seems to be an important point that I did not explain. We are not part of a big institutional building. This is a small one-story building without a basement or upper floors (and therefore without a lift). The building is surrounded by grass and trees and the street has no heavy traffic. I do need to verify the frequency of squirrel foraging and woodpeckers cannot be excluded. I generally work with the lights off. Air vibration can be excluded. The air is gently filtered through special ceiling hoses. Pumps and water cooling are in an adjoining room, so I suppose they are not interfering. Vibration frequency does not change with magnification. JEOL installed a transformer to adapt from Japanese to Austrian current standards. I started a logbook to understand when the vibrations happen (time of the day, regularity). Following your advice, I'll check the following:
- image rotation (not sure what to expect here)
- higher WD and lower kV
- mechanical vibration with a glass of water, a stethoscope or a smartphone (phyphox)
- if WiFi/WLAN is the source of the disturbance
- construction activities nearby (we have a train not too far away but this wouldn't create a continuous vibration over several hours)
- if the vibration frequency is reminiscent of air fans or perhaps air bubbles trapped in the coolant loop
- if the electrical installation is correct. This may be hard for me because I need to call an electrician, and I cannot predict when the vibrations will happen
- use a power conditioner
I'll write back with more details later. Thank you again! Stephane Nizet [email protected]
Some news of the investigation by Hercule Poirot: Magnitude does increase with magnification, and the vibrations stay horizontal even if I change the scan rotation. I noticed a small increase in magnitude when increasing the working distance and a stronger increase when decreasing the kV. So, this looks like EMI. But we checked everything around the SEM and found nothing. It is worth noting that the last time I measured the frequency I counted 40 Hz (I don't know why I counted 79 Hz the first time). Let's follow the trail with these new clues. Thank you all for your help! Stephane Nizet [email protected]
Vibrations can be very vexing to deal with so let me share three experiences I have had over the years. Possibly these adventures will present some new ideas in your search. A microscope in Milwaukee had an intermittent, low frequency vibration problem and after a couple days I traced it to a very long air conditioning duct located in the attic of the hospital. The duct was acting like a low frequency organ pipe and was actually shaking the building structure.
Solution: Three or four holes were punched in the pipe to detune it, thereby altering the resonant frequency. The air conditioning flow was still acceptable, and the vibration was gone. Another interesting problem presented itself in Miami, Florida. At about the same time every day, it was impossible to do any usable microscopy because of some mysterious vibration. Why was it always at the same time? A golf course a couple blocks away turned on its sprinklers every day at the same time and their pumps vibrated the ground enough to mess up the microscope resolution. Finally, I was chasing a very low frequency vibration at the Mayo Clinic and since my vibration equipment was portable, I went outside the lab and took a tour of the beautiful campus. Fortunately, it was in the summer. Anyway, the vibration seemed to indicate some sort of rotating source. I ended up at an auxiliary power generating facility, watched my equipment for a while and decided one of the generators located inside had a bad bearing in its drive shaft. I went in and told a guy who seemed in charge that he had a bad bearing on one of his dynamos. He said, “You're right. How did you know that? We are waiting for replacement parts.” Magic, I said. Alex Greene [email protected]
Dear community. I promised to update the many of you who helped me. As we say in French, the mountain gave birth to a mouse: the problem with the vibrations didn't come back. It seems that it was a one-time event. Someone gave a hint about the necessity to have a separate grounding for the SEM and the JEOL engineer took this advice seriously and will have a look at it. Thank you again, hopefully this is the end of the story. Best regards, Stephane Nizet [email protected]
Laser Warm-up Time
Confocal Listserver
With regard to argon lasers, does laser warm-up time (to stable power output) change in a predictable way as lasers age? I am also interested in information on other types of lasers. Thanks, Ben Abrams [email protected]
Power supplies are an issue in virtually all electronics as the electrolytic capacitors and high-voltage capacitors all deteriorate over time. Heat expedites the aging process of the dielectric in the capacitors, and eventually the cap loses the ability to store charge to some extent. This, in turn, causes issues with regulation such as ripple or changes the duty cycle of the switching electronics. The net result is over time the output voltage may decrease or take longer to be reached. Heat also negatively impacts other components, causing drift from the original reference voltages and such. There are a few papers out there from IEEE, ASME, and NREL on this issue. I can't point you to something definitive. If I recall correctly, an issue I ran into with an argon source was the high voltage required to initiate lasing, but once the tube started lasing it was fine as it operated at a lower voltage. There was a diode booster network with high voltage caps to get the “starter” voltage high enough, and these caps degraded over time. I am not sure how “predictable” this is in terms of modelling, although there is good info available on the electrolytics. I have also heard that the laser tube electrodes themselves erode or become pitted over time, but I am not sure if this happens at a fast enough rate to be material. Perhaps others with more recent experience will have some thoughts. Colin Haig [email protected]
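For a sense of scale (a textbook estimate, not from Colin's post): in a simple full-wave rectified supply with reservoir capacitance $C$, line frequency $f_{\text{line}}$, and load current $I_{\text{load}}$, the peak-to-peak ripple is approximately
$\Delta V \approx \frac{I_{\text{load}}}{2 f_{\text{line}} C}$,
so a capacitor that has lost half its capacitance roughly doubles the ripple the downstream regulator has to absorb, which is one route from capacitor aging to unstable or slow-to-settle output.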
As Colin mentions, if a laser takes a long time to stabilize, it is an indication of pending problems with the electronics. Beyond gas lasers, DPSS lasers often have a temperature-stabilized frequency-doubling crystal oven, which can fail, typically due to the control electronics that govern the heater. Pure diode lasers have less that can go wrong, but the current drivers can fail, and you may notice ripple that takes longer to settle, or does not settle at all. Otherwise, a sudden increase in noise may also be due to a sudden change in environmental factors, such as a change (or failure!) in an HVAC system, ambient vibration, or the floating optical table failing to float or touching a non-isolated object. Craig Brideau [email protected]
New High-Pressure Freezer
Microscopy Listserver
Dear colleagues, Fortunately, we have funds to buy a new high-pressure freezer. Apparently, there are 3 options: Leica ICE, HPM Live U, and Martin's Compact 03. My plan is to try to have real-life tests with easy (yeasts) and difficult (Arabidopsis leaf) samples. I am aware that it usually takes multiple runs to finely adjust the workflow to particular needs, but I will have just one shot to use the machines. Aleksandr Mironov [email protected]
I have used the Leica PACT2 for 10 years. Based on my experience, yeast isn't an easy sample to prepare; I consider it a tough one. For consistent results, I would suggest cultured cells. Hiro Uryu [email protected]
Thank you very much for your insight! My original plan was to use my own samples like yeast and plants, freeze them, and then freeze-substitute in our lab. However, many people (and you) are saying that yeast are not easy. The main problem with yeast is not freezing (freezing should be perfect as they have dense cytoplasm), but freeze-substitution. Another candidate as an “easy sample” might be Arabidopsis seedling roots as they do not have air inside. However, I would be very grateful if you can provide advice on an “easy” sample (Drosophila larvae at some stage?) that can be frozen and substituted without problems and can be transported to another place (this limits the applicability of mammalian cultured cells). Aleksandr Mironov [email protected]
I understand that part. How about algae? Here is an example: https://www.nature.com/articles/srep14735. Fly embryos might be another source (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3143026/). Growing these cells doesn't require CO2. Alternatively, you can use lightly fixed tissue, such as brain or liver. After fixing, rinse in buffer and slice with a vibratome in buffer. Afterwards I usually keep the slices in a cryoprotectant in a -25°C freezer. These samples can be shipped with an ice pack. Hiro Uryu [email protected]
As an “easy” sample you might consider something not living. I have no personal experience, but it strikes me that a given material of some kind should behave reliably. Maybe related to biology such as liposomes, ice cream (?) or a polymer. Tobias Baskin [email protected]
I agree with Tobias. HPF is the best way to prepare hydrogels, and they do well with it; they are best looked at with cryoSEM. The same goes for ice cream, which also does well with HPF and cryoSEM. Phil Oshel [email protected]
Here's the good news about commercially available high-pressure freezers: They all work! The physics and engineering around shooting liquid nitrogen at 2000 bars onto metal discs has entered the phase of vastly diminishing returns where the minor differences in cooling rates are negligible in comparison to the properties and dimensions of the sample. All HPF systems will adequately vitrify a 100 µm-thick layer of yeast paste (as noted, good luck with getting that infiltrated) and will fail at vitrifying a 200 µm-thick hydrogel with high water content. The degrees of variation in between need to be addressed by changing the sample geometry or its freezing properties (fillers, cryoprotectants). I won't go into all downstream prep issues that trip users up. The conclusion is that you should test with samples that you're planning to use and think about future users and your facility: expert users (flexibility in design/sample configuration) versus novice users (one-size-fits-all-now-please-press-this-button); specialized addons needed? (CLEM, synchronized stimulation); serviceability and support; cost of purchase AND service/parts AND supplies. Happy to provide additional context offline. And a historical anecdote to illustrate the above: One of the first test samples for the Balzers HPF at ETH Zurich (Martin Mueller et al.) was apple tree leaves - because there was a tree outside, they had the right thickness, and the building was shared with Plant/Food Sciences. There were HPF planchettes with green stuff all over the lab. Chris Buser [email protected]
Dear Tobias and Philip, “Not-living” is a good suggestion, but we need to freeze live matter with HPF. Alex Mironov [email protected]
I have also found bacteria to be good subjects. If you have colleagues working with C. elegans, you can use the bacterial mat they culture the worms on. This is also good for TEM thin sectioning. But: you also want to use the samples you actually want to work on, as previously suggested. Doesn't matter if anything else works if what you're working on doesn't. Phil Oshel [email protected]
That makes sense. I was thinking your question was based on evaluating the performance of the machine. In that case, comparing with a predictable (easy) example might make sense. But as others have pointed out, modern instruments are well-designed so this kind of comparison might not be necessary. In that case, you might choose the samples your users are most likely to freeze. Tobias Baskin [email protected]
Effect of Incubator on Cells for Live Microscopy
Confocal Listserver
We have acquired some copper-lined incubators. Culturing cells in these incubators has resulted in changes in maturation, morphology, and expression of primary cells compared to our old incubators. I suspect their signaling may be altered, as most changes seem functional and no changes were observed in proliferation. My manager wants to keep the incubators and use them for short-term culture, such as holding cells for live-cell microscopy. I am posting this on multiple platforms to see if anyone else has had this problem. Given that cells will be in the incubator for hours while imaging each dish, I am concerned my cells will be affected and that this could cause variability in results. Thanks for any suggestions or experiences that may help me out. Jessica Anania [email protected]
Generally, copper-lined incubators are better for keeping microbial surface growth down due to copper's inherent antimicrobial properties. However, that doesn't make them impervious to contamination. Have you done a mycoplasma test on the cells that were in the incubator? If you are not seeing obvious fungal/mold/bacterial growth but are noticing significant changes in cell behavior, mycoplasma would be the first thing I check. John Heddleston [email protected]
Thanks for the suggestions. The incubator is new, and we clean all our incubators once a month. I use primary murine bone marrow-derived cells, so there is a possibility that, due to their origin, they are mycoplasma positive. However, we test all our lines every month. Cells are only in culture for 6–10 days, which means every second batch is tested, and I've never had a positive sample, nor fungus or bacteria (thankfully). I did not test the last batch, but any contamination would likely also have affected my comparison cells grown in our older incubators. I will check the next direct comparison to be sure. Another group has been in contact to say that they also had problems with long-term culture of primary cells; they saw that addition of copper ions to the medium blocked cell differentiation. My cells did differentiate, but not in their usual time frame, and functional assays were different. Thanks. Jessica Anania [email protected]
Interesting non-confocal question, indeed! As metals are exceptionally non-volatile, one would expect much less than one atom of copper vapor in a typical incubator (less than one Earth volume in size). No doubt copper compounds will be much more volatile. Even simple copper carbonate can presumably reach 0.1 micromole in 1 cu ft and might partition preferentially into aqueous solutions. So, I'd suggest (in order of importance): 1) make sure the temperature, humidity and CO2 concentration are equivalent; 2) add minute amounts of Cu ions to the cells in the regular incubator and see if similar things happen; 3) assay for Cu in the media maintained in the Cu-lined incubator, for example, “Dibromo-PAESA” or other sensitive colorimetric or fluorometric assay. Zdenek Svindrych [email protected]
I have checked the temperature, humidity and CO2 concentration and they are functioning normally (and the same as the other incubator). I haven't done suggestions 2 or 3. Jessica Anania [email protected]
When something like this happened in my old labs, we'd first check whether anyone used volatile organics or bleach to clean/sterilize the incubator. Cells, and especially stem-like cells, react to low but long-term exposure to that stuff. Tim Feinstein [email protected]
Long-Term Mounting of Zebrafish
Confocal Listserver
Colleagues, I would like to mount some fixed zebrafish embryos for long-term storage/imaging. Long term being “carry around in a slide box in a backpack for a few years (like the Molecular Probes fibroblast or pollen grain slides)”. I need to prepare these for deep tissue 3D imaging at a microscopy course I run (QFM), and while I can easily mount in agarose or glycerol with nail polish in a hanging drop slide or glass bottom dish, I am looking for a permanent embedment so people who have the slides can use them in demos or as examples after the course. So wise colleagues, what is the solution (pun intended)? Do any of the commercial mounting media work? We routinely use aqueous PVA-based mountants (Gelvatol) for slides, but this does not work for zebrafish embryos for the long term. To add more spice to the question, the fish will have fluorescent proteins (probably eGFP and mCherry) and be labelled with other conventional dyes (cy5-phalloidin and DAPI) which I would like to maintain. Ideas??? Simon Watkins [email protected]
We mount crustacean embryos and larvae in methyl salicylate and then ring the coverslip with something permanent like fingernail polish. These have lasted >10 years. Also see “Rhodamine fluorescence after 15-year storage in methyl salicylate” (https://doi.org/10.1017/S1551929500055383). Phil Oshel [email protected]
I echo Phil's suggestion of using some form of tissue clearing medium, especially an organic solvent such as methyl salicylate. Alexa dyes indeed remain stable for a long time in this dehydrated environment. My main concerns would be (1) stability of the coverslip sealant in contact with the organic solvent, though one can't argue with the noted 10-year results, so perhaps nail polish is more solvent resistant than I thought. Otherwise, UV-cured glue such as Bondic seems quite solvent resistant. And (2) the cleared sample and mounting medium have a high refractive index (ca. 1.55), so deep imaging at high NA will likely result in some aberrations unless the objective is matched to that index. However, it should be fine at the lower NAs and magnifications suitable for samples of that size. Further information on clearing can be found at https://www.nature.com/articles/s41596-021-00502-8. Another implementation for long-term mounting of zebrafish for light sheet microscopy can be found at https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0161402. The main tricks from the methods were nanobody boosters for staining and Vectashield mounting medium. However, the light sheet-specific mounting and 2-month stability (likely due to the aqueous environment) may not be in line with what you're looking for. Kurt Weiss [email protected]
As a cover glass sealant, I suggest VALAP (and I do not think an outer sealant is necessary). VALAP is a 1:1:1 mixture of vaseline, lanolin, and paraffin. It is prepared by melting and mixing the components in a glass beaker or bottle on a heating plate. The mixture is applied with a small brush with a thin tip and hardens immediately. It can be reheated/reused many times. VALAP is preferred in cases where solvent-based sealants like nail polish may interfere (for example, for live cell preparations). George McNamara [email protected]
I have to agree with Kurt that the coverslip sealing agent is important. Most fingernail polishes use acetone as a solvent, which is known to cause problems. For long-term storage, it might be a good idea to use a double-sealant method: something known to be safe (I'd suggest agarose, but that's porous), surrounded on the outside by, for example, fingernail polish. This will seal nicely and have fewer issues with the clearing agent (methyl salicylate, etc.). Phil Oshel [email protected]
I agree with these solutions. However, the problem, so far, is the fluorescent proteins in a non-aqueous environment. I am sure the Alexa dyes will be fine. We will work on it and report back. I was hoping that there was a recipe I had missed that the whole world of fish cognoscenti uses. It would appear that, for the most part, they just breed more fish. Simon Watkins [email protected]
In full disclosure, I distribute for SunJin Labs in Australia. Something to try is an aqueous-based clearing mount. SunJin Labs produces prepared slides with 550 micron-thick tissue sections mounted in their Rapiclear® clearing reagent (https://www.sunjinlab.com/product-category/rapiclear/). They are quite robust and bright for a long period, so perhaps you can do something similar with Zebrafish embryos? The slides are sealed with their iSpacer® product, a double-sided sticker that is available in different thicknesses and seals very well. The Rapiclear® is very compatible with fluorescent proteins, but I'm not sure about long-term stability. You may be able to rustle up something yourself, but this would be an easy way to start. No zebrafish option yet, but SunJin Labs prepared slides are a great ready-to-go thick section alternative to thin section slides for testing microscopes or conducting training demos. Ben Hibbs [email protected]
In line with this, I had crude hand-cut ~500 µm “sections” of tdTomato/GFP endothelial reporter mouse organs, embryonic day 15 to postnatal, mounted in ProLong™ Diamond (no commercial interest) and sealed with nail polish for several years in the fridge. I harvested the organs and put them in ethanol for 10 minutes and then PBS in the fridge until I mounted, let them set up, and sealed them. It's a little pricey for the ProLong and you need 3 or 4 sections the same height to balance the coverslip, but I've used them on multiple confocal microscope demos through the years. They are as bright now as when I mounted them. I would have thought they would have molded up long ago, but they seem fine in the fridge. Light penetration through dense tissues like kidney isn't the best, but lung and intestine are great. I suspect fish embryos are more like the latter. Brian Johnson [email protected]
Strange Hoechst Signal
Confocal Listserver
Hello microscopists, I am seeking advice on behalf of one of my core confocal users. He is concerned about images of nuclei produced by Hoechst labeling: a) some areas show no detailed structure and appear saturated (though 8-bit signal levels are <200); and b) high background levels are present. I suspect that this may be related to the sample prep protocol, as other samples from the same and other labs have normal nuclear signal and the confocal is performing within specs. Our core does not provide a sample prep service, so labs are responsible for this important step. A sample description is available at https://imgur.com/a/8J2Sliq. Cells are oligodendrocyte progenitor cells stained by immunocytochemistry, fixed in 4% PFA for 10 min, and blocked with 30% donkey serum in PBS. Primary antibody staining (goat anti-Olig2) is overnight at 4°C with Alexa 598 as the secondary. A second primary antibody, against PDGFR alpha, with Alexa 488 as its secondary, is also used. Hoechst is applied at a 1:10,000 dilution. Any insights/comments on how to improve the nuclear signal are appreciated. Best, Arvydas Matiukas [email protected]
First, I'd recommend doing a Hoechst-only control (no other reagents) to rule out bleedthrough from the other dyes into the blue channel. Second, what mounting medium are you using? Some, like Vectashield, have a blue autofluorescent background. Others have nucleic acid dyes in them, such as DAPI. Third, you give a dilution for Hoechst, but not the final concentration or the label time. I generally recommend 0.4 μg/mL for 5 minutes. If a higher concentration or much longer times are used, there can be a small degree of protein binding as well (likely due to charge affinity). Jason Kilgore [email protected]
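To illustrate why the dilution alone is ambiguous (the stock concentration here is an assumption, not stated in the original post): Hoechst 33342 is commonly supplied as a 10 mg/mL stock, in which case
$\frac{10\ \text{mg/mL}}{10{,}000} = 1\ \mu\text{g/mL}$,
already 2.5× the 0.4 μg/mL suggested above, whereas a 1 mg/mL stock at the same dilution would give only 0.1 μg/mL. Hence the request for the actual final concentration.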
The protocol does not describe permeabilization. Saponin or Triton? This is critical. It would also be helpful to see the raw data with metadata included to assess instrument issues, but I agree it is most likely sample prep. Michael Cammer [email protected]
Confocal images of UV-excitable dyes like Hoechst/DAPI tend to have low S/N. This is in part attributable to low throughput of the excitation light through the objective and/or lightguide, and may be further exacerbated by objectives with less-than-stellar chromatic correction, since both the excitation and the emission may not be focused in the proper plane and thus miss the pinhole. I don't see high background in the Hoechst image you posted, but background staining with Hoechst/DAPI can be reduced by a) not including these dyes in the mounting medium and b) washing with buffer 1-2 times after staining, since unbound Hoechst/DAPI does fluoresce, albeit less brightly than DNA-bound dye. At low dye concentrations this is less of an issue. You mention that the Hoechst was diluted 1:10,000, but not its final concentration. Abby Dernburg [email protected]
It is likely that your user seeds the cells on chamber slides with a thick glass bottom and adds mounting medium and then a coverslip. The correct procedure is to have the sample directly against the coverslip. Otherwise, the mounting medium ends up between the objective and the sample, leading to a longer light path through a potential/likely refractive index mismatch. Putting mounting medium between the objective and the sample also leads to low reproducibility, since a variable amount of PBS from the staining is left behind and the amount of mounting medium added varies accordingly. I suggest a procedure with cells seeded on #1.5 coverslips, either loose coverslips that can then be mounted on a slide or chamber slides with a #1.5 coverslip at the bottom. Another point: if the objective has a correction collar, it must be adjusted. This might salvage the sample. Air objectives would also be less sensitive to refractive index mismatch than high NA objectives. By removing the aberrations due to refractive index mismatch, a low NA objective might even give better resolution than an aberrated high NA one. Sylvie Le Guyader [email protected]
I am happy to summarize and to forward many thanks for all the comments and advice. These convinced my user that the confocal is performing OK and that he should concentrate on optimizing the settings. His main concern really was accurately counting the cells (nuclei) in 30 μm thick slices. Following the advice that the nuclear signal looks better in widefield, we reimaged his slide with an increased pinhole (63x, ~3 AU), and this addressed the issue. I also suggested enabling the z-correction of the collected signal, but its effect was less pronounced. In summary, reimaging under the suggested conditions improved the image quality and cell counts without re-mounting the sample. I forward the deepest appreciation for your expertise – you made his day! Arvydas Matiukas [email protected]
The better signal in widefield is primarily due to the use of more suitable light sources, as widefield illumination is usually done with LEDs (or, historically, with mercury or xenon arc lamps) that have very strong bands matching the excitation maxima of Hoechst and DAPI much better than a 405 nm laser does. Abby Dernburg [email protected]
Agreed, but opening the pinhole mimics a 3D Gaussian blur, which is a helpful post-processing step that I often apply to confocal DNA images. You can usually spare some spatial precision in that channel, and it counteracts the terrible noise properties that a 1 Airy Unit-pinhole Hoechst image normally has. Doing it this way gets the same effect without bouncing the data files through Fiji. Tim Feinstein [email protected]
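For anyone who does want to apply the blur in post instead, here is a minimal sketch; the file name and sigma values are placeholders, and sigma is given in pixels along (z, y, x), to be tuned to the actual voxel size.

```python
# Minimal sketch of the post-processing alternative described above:
# a 3D Gaussian blur applied to a confocal nuclear-stain stack.
import tifffile
from scipy.ndimage import gaussian_filter

stack = tifffile.imread("hoechst_stack.tif")        # placeholder file; (z, y, x) array
smoothed = gaussian_filter(stack.astype("float32"),
                           sigma=(1.0, 2.0, 2.0))   # pixels in (z, y, x); adjust to taste
tifffile.imwrite("hoechst_stack_smoothed.tif", smoothed)
```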
The 405 nm laser that almost every confocal has is extremely inefficient at exciting Hoechst and related nuclear markers. Solution - use more Hoechst. More than the low concentrations from protocols optimized for widefield imaging with more effective excitation and no pinhole. A bonus is that you often find a very faint diffuse signal from the whole cell that is sufficient to show its extent. Clearly high, or even any, concentration of a dye that intercalates DNA is less than ideal for live cells. An assumption is that the interference with normal cellular function may not be a problem in short-term live imaging experiments that start with the application of Hoechst. But does anyone know if this is true and if so the “safe” timescale, or is this merely wishful thinking? Jeremy Adler [email protected]
Just a note: Hoechst and DAPI do not intercalate; they bind to the minor groove, which is absent in RNA, and thus the two dyes are DNA-specific. I fully agree with the notion that anything that binds to DNA is likely not healthy for the cell. Steffen Dietzel [email protected]
Most people use Hoechst 33342, or sometimes 33258, both of which are like DAPI in wavelength, with an excitation peak at about 350 nm. But a different Hoechst dye, Hoechst 34580, excites at longer wavelengths, with a peak at 392 nm, and is therefore more efficiently excited with 405 nm lasers. The protocol and binding characteristics for all three are the same. Jason Kilgore [email protected]
Brain Tissue Fixation
Microscopy Listserver
I just read a 2019 paper where the authors perfusion-fixed mouse brain tissue with 2% glutaraldehyde and 2% paraformaldehyde in 0.1M phosphate buffer, pH 7.3. What caught my attention was the use of phosphate buffer. I was taught that due to higher levels of calcium ions in the brain cells, the use of phosphate buffer would result in unwanted precipitates of calcium phosphate, hence the use of sodium cacodylate buffer to avoid precipitation. I would like to know your thoughts about this bit of “conventional wisdom” of phosphate buffer (use not preferred) versus sodium cacodylate buffer (use preferred) in the primary fixation of brain tissue. Tom Bargar [email protected]
One of the faculty here, Teresa Milner, is an expert in immunoEM of brain tissue. She has a chapter in one of the Methods in Molecular Biology series (Neurodegeneration: Methods and Protocols, Chapter 3). She perfuses with (brace yourselves) acrolein and paraformaldehyde in a phosphate buffer. Her images are spectacular. Lee Cohen-Gould [email protected]
I believe the precipitate occurs when glutaraldehyde and osmium are combined with phosphate at over 100 mM concentration. I typically perfuse brain with the combo fix (GA/PFA) in 100 mM Sorensen's phosphate with no issues or hassles from precipitate. I do use MgCl2 instead of CaCl2, as the latter will precipitate immediately. My problem with cacodylate buffer is some extraction, depending on the tissue, typically in the mitochondria (clear zones). I prefer phosphate buffer as it is more physiologic. Michael Delannoy [email protected]
I understand your reservations about phosphate buffer. I would argue the alternative shouldn't be cacodylate (expensive, toxic waste issues). HEPES or PIPES would be suitable substitutes for most tissues, although I can't comment specifically on brain tissue. I prefer HEPES because fixatives tend to acidify over time, and HEPES at pH 7.4 is on the high side of its pKa (7.3) and therefore has more buffering capacity there than PIPES (pKa 6.76). Thomas Phillips [email protected]
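A quick Henderson–Hasselbalch check of that argument, using the pKa values quoted above: at pH 7.4 the base-to-acid ratio is
$\frac{[\mathrm{A}^-]}{[\mathrm{HA}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_a}$,
which gives $10^{0.1} \approx 1.3$ for HEPES (about 56% base, 44% acid) versus $10^{0.64} \approx 4.4$ for PIPES (about 81% base, 19% acid). Since the weak-acid term of the buffer capacity scales with the product of the two fractions, HEPES at pH 7.4 buffers roughly $0.25/0.15 \approx 1.6\times$ as strongly per mole as PIPES, and it retains that capacity as an acidifying fixative drifts toward its pKa.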
For regular TEM I still prefer the standard modified Karnovsky's fixative in sodium cacodylate buffer, maybe with a little potassium ferrocyanide added to the mix for certain studies in the brain. ImmunoTEM is a different horse and requires a different saddle! LOL. I find that cacodylate buffer does not work as well for immuno tagging work. Lita Duraine [email protected]
This is a very interesting thread, since two colleagues and I were discussing this yesterday. We were wondering if anyone knows of, or has published, a comparative study that experimentally determines which fixation formulation and protocol produces the most artifact-free and lifelike brain tissue for EM (and I do realize these two terms are a discussion in themselves, probably for another thread). I have the impression that many people just use whatever protocol they learned as students or read in textbooks, and substantial differences seem to occur. Parameters such as time, temperature, osmolality, aldehyde concentration and, when employed, perfusion pressure surely must have been compared by someone somewhere. I found an interesting comment Karnovsky published in 1985 in which he states “…the fixative has obviously proved useful to many, even though a factual under-pinning for the rationale offered for its development was, and is, largely lacking.” Is it possible that EM experts have persevered for years using protocols that are not based on a thorough comparison of formulation and method? I hope someone out there might be able to educate me on this interesting quirk of EM science. Chris Guerin [email protected]
See BP Arborgh et al., The osmotic effect of glutaraldehyde during fixation. A transmission electron microscopy, scanning electron microscopy and cytochemical study. https://doi.org/10.1016/S0022-5320(76)90009-5. It has some information on the topic. Geoff McAuliffe [email protected]
A book that I keep on hand in the lab goes into the various processing techniques and provides comparison images. MJ Dykstra et al., Biological Electron Microscopy Theory, Techniques, and Troubleshooting, Plenum Publishers, 2003. Lita Duraine [email protected]
I agree with Chris. There are so many older books and so much published literature on the various effects of chemical fixation that it can be bewildering. I read a great line in the introduction of a volume by Lewis and Knight (Practical Methods in EM, 1977) which stated, “The newcomer to electron microscopy is faced with an embarrassing range of techniques to choose from at every stage of the process from initial fixation to final staining of the ultrathin sections.” We all know that even if we have a basic protocol, there will be some empirical wrangling to achieve the best results (if funding and time permit). I think a symposium at all microscopy conferences on conventional biological and soft material fixation should be considered, with good-natured, vigorous discussion and, hopefully, various sessions on methodology specific to the detection approach (for example, basic imaging with good contrast and high resolution, immunoEM, negative staining, etc.). John Shields [email protected]
We ran across similar questions and decided to run an experiment to compare some conditions since we had some hES cells that differentiated into astrocytes and a liquid handling robot available. Here's a link to the details: https://www.heartlandbiotech.com/_files/ugd/c19c1a_878fcdbb04b04f0b8a5e184d8be44b18.pdf. After reviewing the results, it was suggested that starting dehydration at 30% may mitigate any negative effects caused by leftover phosphate buffer, so we will likely start at 50% and compare that variable in the next set of experiments. I agree that it would be nice to have a place to easily share this type of information. Tom Strader [email protected]