Archaeologists and other field scientists are a prime audience for wilderness medicine training because their work so frequently takes place at remote field sites. Peixotto and colleagues (2021) mount a compelling argument in this issue for why archaeological sites, even if not strictly in a setting otherwise defined as “wilderness,” should be defined as wilderness activity locations. This mirrors other recent publications arguing that “wilderness,” in its application to health care, must be contextual (Hawkins 2018). The unified definition of wilderness across many leading wilderness medicine texts is “areas where fixed or transient geographic challenges reduce availability of, or alter requirements for, medical or patient movement resources” (Hawkins 2018; Hawkins, Millin, and Smith 2017; Hawkins et al. 2015). The relevance of this definition for many archaeological sites that would not typically be considered “wilderness” is immediately apparent. One of the challenges to obtaining proper training and advice is that wilderness medicine is an unregulated medical field (Hawkins and Winstead 2021). Not only does the content vary between wilderness medicine educational vendors, but the quality of that content varies as well. Best practice in contemporary wilderness medical education is to incorporate evidence-based medicine (EBM) into instruction. In brief, EBM privileges practice guidelines, collective clinical experience, and—most importantly—medical science in our teaching and medical practices, as opposed to anecdotal or legacy training (Evidence-Based Medicine Working Group 1992; Sackett et al. 1996). Just as an archaeologist would recommend that instruction in archaeological topics be grounded in the highest-quality archaeological science and consensus conclusions, we as health-care professionals similarly recommend that wilderness medicine training and practice be grounded in the highest-quality medical science and consensus conclusions.
The purpose of this review of common myths in wilderness medicine training is to highlight teachings that remain prevalent in many schools and curricula but are grounded in neither scientific evidence nor current best practice. Special attention is paid to teachings that might influence the choice or use of equipment in an archaeology first aid kit, or that would most likely come into play in medical care delivered in an actual archaeological field environment. Every effort is made to explain the consensus-guideline and scientific-evidence sourcing for these recommendations, to clarify why they represent valid, current best practice even where they deviate from what an archaeologist may have been taught in a non-EBM-based wilderness medicine class or publication.
MYTHS 1 AND 2: ANAPHYLAXIS MYTHS
Anaphylaxis represents the life-threatening extreme of allergic reactions. Whereas minor allergic reactions generally involve one organ system (e.g., the skin or the gastrointestinal system) and produce more minor symptoms (e.g., itching or nausea), anaphylaxis is multisystem and life threatening. Differentiating anaphylaxis from minor allergies is a component of nearly every level of formal wilderness medicine training (Hawkins and Winstead 2021). Many states also offer courses specifically on anaphylaxis identification, and patients at risk for anaphylaxis receive training on that differentiation (Noble 2016; North Carolina State Legislature 2009).
Myth 1: Using Medications Other Than Epinephrine
The contention that multiple drugs are useful as first-line agents for anaphylaxis is a myth. The reason so much effort is put into teaching laypeople to identify anaphylaxis is that it is life threatening and can kill before emergency medical services (EMS) arrive, especially in a remote setting, and there is only one first-line medication that treats it: epinephrine (Song and Lieberman 2015). Many individuals are taught that epinephrine is merely a “bridging” medication, and that the definitive treatment for anaphylaxis is antihistamines (e.g., diphenhydramine) or steroids (e.g., prednisone). Although these medications do treat the symptoms of minor allergic reactions, they do not reverse the life-threatening components of anaphylaxis, and there is little evidence to support their use as first-line agents to treat or to prevent a rapid recurrence of anaphylaxis. Weak evidence supports possible use of antihistamines as a second-line agent, and there is no meaningful evidence to support the use of steroids (Shaker et al. 2020).
Anaphylaxis kills by two pathways: (1) airway swelling that results in the inability to breathe and death from lack of oxygen, and/or (2) loss of blood pressure from dilation of blood vessels, which causes shock and death from circulatory collapse (blood pressure too low to sustain life). Epinephrine directly reverses both of these; antihistamines and steroids do not do so to a degree sufficient to justify their primary use (Hawkins, Simon, et al. 2017). Therefore, when building a medical kit or considering treatment of allergic individuals who may develop anaphylaxis, the critical medication to be trained in and to use first is epinephrine. Some wilderness medical authorities argue that “anyone with a personal history of allergic reaction, and certainly a history of anaphylaxis, should carry epinephrine with them at all times,” as should any wilderness medicine provider, given its lifesaving potential (Groves and Cushing 2018:403). The Wilderness Medical Society (WMS) has formally recommended that nonmedical providers working in remote environments, such as many archaeological field settings, be trained to administer epinephrine (Gaudio et al. 2010, 2014). Not only do these environments carry risk of bites and stings, but food allergies are also common. It is not unreasonable to carry and use antihistamines and steroids for allergic reactions, including for symptomatic relief in anaphylaxis, but this is a secondary concern for treatment, and it does not carry significant lifesaving benefit. Epinephrine is the primary medication to reverse anaphylaxis, and it is our contention that every scientific program operating in remote environments should carry it and have personnel—even nonmedical personnel—trained in its use.
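For programs that maintain written emergency action protocols, the decision logic above is simple enough to state explicitly. The following sketch is purely illustrative; the organ-system categories, sign list, and function names are our own simplifications of the cited guidance, not a validated clinical tool.

```python
# Illustrative sketch of the anaphylaxis decision logic described above.
# Category names and the sign list are our own simplifications, not a clinical tool.

LIFE_THREATENING_SIGNS = {"airway swelling", "difficulty breathing", "shock", "collapse"}

def suspect_anaphylaxis(organ_systems_involved: set[str], signs: set[str]) -> bool:
    """Anaphylaxis is multisystem and life threatening; minor reactions
    typically involve a single organ system with minor symptoms."""
    return len(organ_systems_involved) >= 2 or bool(signs & LIFE_THREATENING_SIGNS)

def first_line_medication(anaphylaxis_suspected: bool) -> str:
    # Epinephrine is the only first-line medication for anaphylaxis;
    # antihistamines and steroids are secondary, symptom-relief agents.
    return "epinephrine (intramuscular)" if anaphylaxis_suspected else "antihistamine / supportive care"

print(first_line_medication(suspect_anaphylaxis({"skin", "respiratory"}, {"difficulty breathing"})))
# -> epinephrine (intramuscular)
```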
Myth 2: Epinephrine Can Be Misused
Epinephrine poses a significant dilemma for organizations that appreciate the need to carry it but feel legally constrained by the fact that it is a prescription drug that can be administered only by a health-care provider (Curtis 2015).
As noted above, numerous states have enacted legislation supporting the training of laypeople to administer this medication to a third party, which addresses the concern that administering epinephrine to anyone other than the person to whom it was prescribed constitutes misuse. The fact that many states have begun requiring that epinephrine be available in schools (an interesting scenario for school-originated scientific field programs) has created some legal compulsion to carry this medication, overriding the fear that using it could be legally compromising (Noble 2016). A useful tool describing state rules is available through the Asthma and Allergy Foundation of America at https://www.aafa.org/epinephrine-stocking-in-schools. Most legislative interventions supporting the training of laypeople also support the ability of a physician or other prescribing clinician to prescribe epinephrine to a second party for use on a third party. This means that at least one person on an archaeological field operation should be trained—and, wherever legally required or available, certified—in epinephrine use, should obtain a prescription and the actual medication, and should practice its use prior to field deployment. Alternately, some states allow the institution to be the recipient of the prescription: a clinician writes the prescription to the program, and a pharmacy fills and distributes it to a representative of that program. As described in the educational review in this journal (Hawkins and Winstead 2021), it is best practice for programs to have a clinician medical advisor who can be approached for the prescription—or, if the prescription is carried by an individual, that person's primary care clinician can serve this role. Additionally, anyone with known allergies or risk of anaphylaxis should carry epinephrine.
Administration of epinephrine is always by injection, either by auto-injector or by needle and syringe. Prior to the advent of auto-injectors, this drug was often administered subcutaneously (as a weal under the skin) and in the deltoid (shoulder). Both practices are now discouraged. Because this is intended as a lifesaving intervention where time matters, epinephrine should be injected directly into muscle rather than under the skin, and many authorities prefer the anterolateral thigh (halfway between hip and knee, halfway between the front and lateral side of the thigh). Both preferences (muscle over skin, thigh over shoulder) are believed to speed absorption and simplify administration (Gaudio et al. 2014; Hawkins, Simon, et al. 2017; Sampson et al. 2006).
Most programs choose auto-injectors for simplicity of carriage and administration, but it is a myth that the EpiPen brand is the only auto-injector (there are currently at least four auto-injectors on the American market) or that auto-injected epinephrine is safer than epinephrine administered by needle and syringe (Hawkins, Simon, et al. 2017). Programs should consider all parameters, including cost and training, when choosing an epinephrine administration platform. Although auto-injectors are approved by the FDA and sold only as single-use tools, wilderness medicine authors have published techniques for extracting additional doses of epinephrine from a used auto-injector and administering them to a patient (Hawkins et al. 2013; Robinson and Lareau 2016). This is relevant because many patients require more than one dose of epinephrine. Although the technique might be helpful in an emergency, we advise that programs carry sufficient epinephrine and reserve this method for when supplies are exhausted—not as a planned technique.
MYTH 3: SNAKEBITE MYTHS
A number of incorrect assumptions can delay appropriate care for snakebite patients. Within the United States, fatalities from snakebites are rare (about six deaths per year), but snakebites result in nearly 10,000 hospital visits yearly. Internationally, about 100,000 people die annually from snakebites. Knowing how to care for a snakebite is therefore essential for those working in the outdoors (Forrester et al. 2018; O'Neil et al. 2007).
The “cut and suck” method of care was refuted decades ago. Studies suggest that this method introduces bacteria into the wound, creating the risk of superinfection or abscess formation for the injured person, and it poses a risk to the rescuer through absorption of venom by the oral mucosa (Alberts et al. 2004; Kanaan et al. 2015). Mechanical suction is also ineffective: studies show that it increases local tissue damage and causes tissue necrosis, thereby complicating care (Bush 2004; Bush et al. 2000; Kanaan et al. 2015). This is important for scientists building medical kits because mechanical suction tools are still marketed. Although they may seem attractive for inclusion in a kit, they are harmful and should be avoided.
Tourniquets are another enduring snakebite treatment myth. There is no medical literature recommending placement of a tourniquet on victims of snakebite in North America. In addition to potentially causing ischemia and gangrene of the affected limb, tourniquet placement results in higher amputation frequency and complicates care upon arrival at the emergency department (ED; Bush and Kinlaw 2015).
On the subject of circumferential bandaging for snakebites, a “pressure dressing” or “compression wrap” is often described and often misunderstood. For bites from Australian elapids (e.g., the Eastern Brown Snake), the technique of placing a pressure bandage is supported by medical literature, although the actual application is difficult and often done incorrectly by laypersons (Rogers and Winkel 2005). The usefulness of this technique is primarily due to the neurotoxic components of these snakes' venom, and although a pressure dressing is not technically a tourniquet, it does affect the flow of blood and lymph from an affected limb. Although the United States does have elapids, such as coral snakes, the species differences are sufficient that it is not clear whether pressure dressings are effective; currently, there is appropriate evidence of benefit only with respect to Australian elapids. This underscores the importance of regionally specific training and awareness for snakebite care. With the exception of this pressure-dressing issue and the cleaning of a wound, all recommendations in this section are valid for all areas, but risk and snake types differ importantly between regions. Some areas (especially islands, such as Hawai'i and Ireland) have no endemic venomous snakes, whereas others (such as India) have a massive number of particularly venomous snakes. Put in perspective: on average, about half a dozen people die from snakebites every year in the United States—either because they refuse antivenin or due to anaphylactic reaction to the venom—whereas, on average, about 11,000 people die every year in India (Anuradhani et al. 2008). Knowing the specific risks of the country and area in which an archaeologist will be working is critical to assessing snakebite risk.
In addition to avoiding the oral suction, mechanical suction, and tourniquet myths, it is imperative not to make incisions, ice the wound, apply electrical current at the site, or cut the head off the snake and bring it to the ED (a decapitated head can still strike and deliver venom; Hawkins, Simon, et al. 2017). Additionally, although nonsteroidal anti-inflammatories (NSAIDs, such as ibuprofen and naproxen) are recommended for a variety of ailments, they must not be given to snakebite victims (Auerbach et al. 2013).
So, what should be done? Prevention is much easier than treatment: paying attention to surroundings and staying well away from encountered snakes are both prudent measures. Treatment of a bite begins with staying calm and moving the patient away to prevent a second bite. Rescuers should attempt to clean the site with soap and water to remove any venom at or near the surface of the skin (in Australia, however, wounds should not be cleaned, because residual venom is used to identify the snake and select the appropriate antivenin). Rescuers should note the time of the bite and, if possible, take a photo of the snake from a safe distance. Removing any constricting clothing or jewelry, especially near the bite, is a critical step because swelling will occur. Rescuers should calmly walk the patient to the closest form of transportation and evacuate them to the nearest medical center, calling ahead if possible to ascertain whether the facility has antivenin (Hawkins, Simon, et al. 2017).
A frequent question is whether archaeology field medical kits should contain antivenin. For most programs, such deployment is not appropriate. First, as a prescription medication, it would require a prescribing clinician embedded within the program, and in most states, its use is beyond the scope of practice of nonclinician health-care professionals such as emergency medical technicians (EMTs) and paramedics (Hawkins and Winstead 2021). Second, each vial of antivenin costs between $1,200 (ANAVIP) and $3,200 (CroFab), and treatment of most bites takes six vials, and often more. Even starting antivenin treatment and carrying a vial or two would be a massive financial commitment from a program for a condition that carries little chance of death in the United States. This calculation might change in other countries—another reason why site-specific risk management is important. Antivenin is also species specific and must match the type of snake causing the bite, meaning that in areas with multiple species, it might be necessary to carry multiple antivenins. Currently, field-deployed antivenin is not a standard part of most archaeological field kits; if unique local features made it worthwhile, that might be an argument for an expedition-embedded medical clinician.
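The cost arithmetic alone illustrates the scale of the commitment. A back-of-the-envelope calculation, using only the per-vial prices and the six-vial course quoted above:

```python
# Back-of-the-envelope antivenin budgeting, using the per-vial prices
# and the typical six-vial course quoted above (USD).
COST_PER_VIAL = {"ANAVIP": 1_200, "CroFab": 3_200}
TYPICAL_COURSE_VIALS = 6  # most bites take six vials, and often more

for product, cost in COST_PER_VIAL.items():
    print(f"{product}: at least ${cost * TYPICAL_COURSE_VIALS:,} per treated bite")
# ANAVIP: at least $7,200 per treated bite
# CroFab: at least $19,200 per treated bite
```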
MYTH 4: THE MYTH OF SPINAL IMMOBILIZATION
Perhaps the most ubiquitous component of twentieth-century trauma care was the concept that the cervical spine (neck bones) needed to be immobilized. After ensuring that breathing and pulse were present, the first person caring for a trauma patient would clamp hands on the side of the individual's head and then focus exclusively on preventing the neck from moving. This would be done until a rigid cervical collar could be placed, with the same goal in mind. Backboards, initially intended to be patient movement and extrication tools, were also repurposed to be applied with the goal of “immobilizing” the spine.
Despite being universal elements of emergency care since the 1970s, these interventions have never been shown in a scientific study to prevent further injury. In fact, studies increasingly show evidence of harm from this intervention and from the related principle that the thoracic and lumbar spine (back bones) should also be immobilized using a rigid board (Hauswald 2013; Hauswald et al. 1998; Hawkins, Simon, et al. 2017; Smith et al. 2018).
In the 1990s, protocols were issued selecting out patients who would not require spinal immobilization in the field (selective spinal immobilization; Hawkins, Simon, et al. 2017). These protocols were supported by the publication of NEXUS (National Emergency X-Radiography Utilization Study) in 1998, and the validation of its criteria in 2000, demonstrating which patients would not require radiological imaging in an ED (Hoffman et al. 1998, 2000). The presumption was that if concern about spinal injury was so low that imaging was not deemed necessary for a patient, that patient also would not need protective immobilization, applying the same ED criteria in the field. In the two decades that followed, more and more studies failed to show the benefit of immobilization for any patient. A new principle of spinal motion restriction (SMR) appeared, which argued that strict immobilization was not required; instead, only reduction of nonphysiological (abnormal) gross motion that caused pain was necessary (Hawkins, Simon, et al. 2017). Coincident with this was a historic change among EMS agencies around the world to discontinue use of rigid long spine boards altogether as medical tools for “immobilization”—although they might still be useful as extrication devices. By 2018, in the first edition of Wilderness EMS, the position was taken that there was no requisite need for any immobilization of any level of the spine, and the term “SMR” was replaced with the more goal-oriented terminology of “spinal cord protection” (SCP; Smith et al. 2018). This change acknowledged that there was, in fact, no evidence suggesting that physiological, nonpainful motion was the cause of any subsequent injury. The current state-of-the-art guideline in wilderness trauma care suggests that SCP should be accomplished by passive SMR (such as the use of soft collars)—or, in the case of conscious patients, instructing them not to move their necks in painful ways—and by avoiding painful or nonphysiological movement in transport (Smith et al. 2018). In 2019, the WMS stated in its evidence-based clinical practice guidelines that “there is no requisite role for commercially made or improvised rigid cervical collars in an out-of-hospital environment,” that there was no requisite need for manual attempts at cervical spine immobilization, and that backboards should not be applied as a medical tool with an immobilization goal (Hawkins et al. 2019:S90). They recommended vacuum splints and possibly soft collars (widely available through medical supply vendors) as suitable replacements for immobilization tools, more in line with a goal of SCP via SMR.
MYTH 5: MYTHS REGARDING HEAT ILLNESS
Heat illness, like many other medical conditions, lies on a spectrum. It is a common risk for field archaeologists and participants in other outdoor fieldwork programs, especially in desert, tropical, or humid environments. Two major myths surround the identification and treatment of heat illness.
A common but outdated teaching is that cessation of sweating marks the threshold for transition from heat exhaustion to heat stroke. Identifying that threshold is critically important because heat stroke is a true medical emergency that requires immediate intervention (ideally immersion in cold water). Pathophysiologically, it has become apparent that some patients enter a full heat stroke syndrome while still having some sweating capacity. Operationally, it is difficult to assess continued sweating, especially when someone's clothes are soaked in sweat. The most important defining threshold for identifying a case of heat illness as heat stroke is alteration in mental status (Hawkins, Simon, et al. 2017; Lipman et al. 2019; O'Brien et al. 2017; Schimelpfenig et al. 2018).
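Because the defining threshold is mental status rather than sweating, the field decision rule is short. A minimal sketch of that rule as stated above (our own phrasing, not a validated triage instrument):

```python
def classify_heat_illness(altered_mental_status: bool) -> str:
    """Altered mental status, not cessation of sweating, is the defining
    threshold for heat stroke (per the guidance cited above)."""
    if altered_mental_status:
        # True medical emergency: begin immediate cooling, ideally
        # cold-water immersion, and arrange evacuation.
        return "heat stroke"
    return "heat exhaustion"
```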
Second, hydration strategies were promoted in the past that called for ingestion of a fixed volume of fluid regardless of thirst. The rationale, often taught in wilderness medicine classes, was that once thirst is triggered, a certain degree of dehydration is already present, and the individual is already at further (but preventable) risk for heat illness. This appears, however, to be a myth: there is no published scientific evidence supporting this physiological concept or its associated forced-drinking strategy. Furthermore, forced drinking has been shown to have deleterious effects, such as hyponatremia (low salt levels) or water intoxication, if the volume ingested is too high for the individual (Schimelpfenig et al. 2018). Current clinical practice guidelines from the WMS and EBM-oriented textbooks argue for ad libitum drinking, or drinking when desired or thirsty, as the best strategy for safe hydration (Hawkins, Simon, et al. 2017; Lipman et al. 2019; Schimelpfenig et al. 2018). Predictive algorithms are available that estimate expected hydration needs in certain environments and activities; these can help with planning and expectations, but they should not be used as fixed, requisite regimens (Montain et al. 1999; Rodriguez et al. 2009).
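For expedition planning, a rough carry estimate can still be useful even though intake itself should remain ad libitum. The sketch below is a planning aid only; the sweat-rate figure is a placeholder assumption of ours, not a value drawn from the cited prediction algorithms.

```python
# Planning sketch only: estimates how much water to CARRY for a field day.
# It is not a drinking schedule; intake should remain ad libitum, as above.
ASSUMED_SWEAT_RATE_L_PER_HR = 1.0  # placeholder assumption; varies widely
                                   # with heat, workload, and individual

def water_to_carry_liters(work_hours: float, safety_margin: float = 1.25) -> float:
    """Rough carry estimate with a planning margin."""
    return round(work_hours * ASSUMED_SWEAT_RATE_L_PER_HR * safety_margin, 1)

print(water_to_carry_liters(6))  # ~7.5 L for a six-hour day under these assumptions
```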
Hydration strategies are also critical for illnesses involving diarrhea and vomiting, which can be quite common on archaeological expeditions. This is a justification for including antiemetic (antivomiting) medications in a medical kit, such as ondansetron in orally dissolving tablet form. Whereas controlling vomiting is necessary to allow for ingestion of needed fluids, controlling diarrhea is not. Consequently, it is generally best to allow diarrhea to proceed without pharmacological intervention while ensuring adequate fluid and electrolyte ingestion (Davis and Mell 2018; Hawkins, Simon, et al. 2017). Note also that commercially available “rehydration” products often have excessive sugar content and unnecessary components (such as food dyes). The World Health Organization publishes a rehydration formula (Table 1) that can be mixed easily in most environments and that contains neither food dyes nor as much sugar as most commercially available products.
Sources: Hawkins, Simon, et al. 2017; Rehydration Project 2014; UNICEF 2016; World Health Organization 2005.
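For field planning, the widely published WHO home formulation is approximately one liter of clean water, six level teaspoons of sugar, and half a level teaspoon of salt. The following batch calculator encodes that commonly cited recipe; quantities should be confirmed against Table 1 and the sources above before use.

```python
# Batch calculator for the commonly published WHO home oral rehydration
# recipe (~1 L clean water, 6 level tsp sugar, 1/2 level tsp salt).
# Confirm quantities against Table 1 and the cited sources before use.
RECIPE_PER_LITER = {"water_L": 1.0, "sugar_tsp": 6.0, "salt_tsp": 0.5}

def ors_batch(liters: float) -> dict[str, float]:
    """Scale the per-liter recipe to the requested batch size."""
    return {ingredient: qty * liters for ingredient, qty in RECIPE_PER_LITER.items()}

print(ors_batch(2.0))  # {'water_L': 2.0, 'sugar_tsp': 12.0, 'salt_tsp': 1.0}
```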
MYTH 6: CPR MYTHS
Following its development and promotion by Peter Safar (University of Pittsburgh) from 1957 to 1960, cardiopulmonary resuscitation (CPR) became a mainstay of both layperson and medical professional resuscitation. By the turn of the twenty-first century, CPR remained a requisite intervention for patients in cardiopulmonary arrest, but its efficacy was beginning to appear mythical. In 1999, Brandeis University sociologist Stefan Timmermans published Sudden Death and the Myth of CPR describing this situation (Timmermans 1999). Recovery rates for those receiving CPR were less than 5%, and the action seemed to be more ritual than effective medical intervention.
In the years that followed, however, a renaissance in resuscitation science occurred. Researchers began looking closely at what worked and what did not with CPR and other resuscitative interventions. Nationally leading communities used this new evidence to implement practices such as high-performance (“pit crew”) CPR, telecommunicator CPR, increased access to automated external defibrillators (AEDs), and extensive layperson CPR education, which led to survival rates for cardiac arrest as high as 62% (Sudden Cardiac Arrest Foundation 2014). Perhaps even more significant than these interventions was the belief among those performing CPR that it was not mythical. Not only is it a myth itself that CPR is a myth, but in fact, elements critical to its success include performing it correctly, believing it will work, and applying algorithms supported by best available evidence. Taking a CPR class from a reputable vendor—such as the American Heart Association, the American Red Cross, or other programs teaching evidence-based CPR—ensures this, as does keeping up with recertification regimens.
There may be differences, however, in the way CPR is applied in wilderness settings such as an archaeological field site (see Peixotto et al. [2021] on the advantages of viewing archaeological sites as wilderness activity sites). Traditional CPR teaching holds that CPR can be discontinued in one of three situations: the patient recovers a pulse, the rescuer becomes exhausted, or care is transitioned to someone of equal or higher training. In a wilderness setting, transitioning care to an EMS provider, even a wilderness EMS (WEMS) provider, may not occur for hours or even days—an untenable situation for out-of-hospital layperson CPR. Moreover, in cases where timely transfer of care or arrival of an Advanced Life Support (ALS) service will not occur, performing CPR to exhaustion puts rescuers themselves at risk in a remote environment, which is by definition hostile and unsafe in and of itself (Davis et al. 2018). Recognizing this, WMS CPR practice guidelines maintain that CPR can be discontinued after approximately 30 minutes if pulses have not returned (Forgey 2006).
Specific environmental concerns encountered in remote archaeological and other outdoor fieldwork programs can prompt deviations from traditional CPR. For example, compression-only CPR has become a more widespread training modality, but drowning victims and very young children are specific exclusions to compression-only CPR. According to the 2019 WMS clinical practice guidelines, “compression-only CPR is likely to be of little to no benefit in drowning resuscitation”; for programs operating in aquatic environments, every effort should be made to obtain training in full CPR and to perform full CPR during a drowning resuscitation (Schmidt et al. 2019). Some programs now teach a C-A-B algorithm (Circulation-Airway-Breathing), but in a remote drowning situation, the traditional A-B-C algorithm should be utilized due to the critical importance of rapidly supplying oxygenation and ventilation for these patients (Schmidt et al. 2019). WMS practice guidelines also state that it is reasonable not to initiate rescue (versus body recovery) or resuscitation interventions (including CPR) when there is a known submersion time of greater than 30 minutes in warm water (warmer than 6°C/43°F) or 90 minutes in cold water (colder than 6°C/43°F; Schmidt et al. 2019).
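These submersion-time thresholds lend themselves to a simple decision check. The following sketch encodes only the guideline quoted above (temperatures in degrees Celsius) and is no substitute for training or judgment; the handling of the exact 6°C boundary is our own assumption.

```python
def initiate_resuscitation_after_submersion(submersion_min: float, water_temp_c: float) -> bool:
    """Encodes the WMS thresholds quoted above: it is reasonable not to
    initiate resuscitation after >30 min submersion in warm water
    (>6 degrees C) or >90 min in cold water (<6 degrees C)."""
    limit_min = 90 if water_temp_c < 6 else 30  # boundary handling is our assumption
    return submersion_min <= limit_min
```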
Similarly, traditional CPR teaching does not apply to cardiac arrests from lightning (as discussed below) or in the context of severe hypothermia. In severe hypothermia, if an initial shock from an AED is unsuccessful in a patient with a core temperature below 30°C, the patient should be rewarmed to above 30°C before further attempts are made. It is important to recognize that patients have recovered from exceptionally low core temperatures in cardiac arrest, and resuscitation attempts should be undertaken and continued regardless of measured core temperature. The lowest temperature from which hypothermic humans can be resuscitated is not known, and the idea that any particular temperature equates to “obvious death” is mythical (Dow et al. 2019). Unconscious patients should be handled gently to prevent ventricular fibrillation (a nonsustaining heart arrhythmia), but if a patient in this situation does lose a pulse for this reason, management should be the same as for any other hypothermic cardiac arrest. Many training programs teach the dictum “No one is dead until they are warm and dead.” This is sometimes valid, because warming a patient who is or appears to be dead may unexpectedly result in that patient's survival. Some patients, however, are cold and dead, and they will remain so regardless of intervention, making the dictum mythical as an absolute. Examples of obvious death include decapitation, torso transection, open head injury with loss of brain matter, a chest wall too stiff for CPR (although some stiffness is expected in profound hypothermia), or ice in the airway. Another difference for CPR in hypothermia is that pulses may be very slow yet still present. For this reason, a pulse check is recommended for one full minute—longer than some traditional CPR teachings.
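The two hypothermia-specific deviations described above (the prolonged pulse check and the rewarming threshold) can likewise be captured in a short sketch; this is our own encoding of the cited guidance, for illustration only.

```python
HYPOTHERMIA_PULSE_CHECK_SECONDS = 60  # pulses may be very slow yet still present

def next_step_after_failed_shock(core_temp_c: float) -> str:
    """If an initial AED shock fails below a core temperature of 30 C,
    rewarm above 30 C before further attempts (per the guidance above)."""
    if core_temp_c < 30:
        return "rewarm above 30 C before further defibrillation attempts"
    return "continue standard resuscitation, with further shocks as indicated"
```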
Lightning provides an interesting physiological circumstance. Patients in cardiac arrest from a lightning strike may experience a rapid return of pulse but continued respiratory arrest due to electrically mediated paralysis of the diaphragm. Such patients may need prolonged rescue breathing, and they may have high rates of recovery. An important additional consideration in lightning strikes is that traditional training for “triage,” or choosing which patient to care for in a multipatient scenario, is reversed. In a typical multipatient scenario where resources must be allocated, patients in cardiac arrest are often permitted to remain dead, allowing resources to be directed toward those for whom rapid or sustained intervention is more likely to result in survival. Because a short duration of CPR often results in full recovery of lightning victims, owing to the unique pathophysiology of the electrical strike, one minute of CPR is warranted for such patients as a highest priority—an inversion of standard priorities sometimes known as “reverse triage” (Davis et al. 2014).
Based on traditional algorithms for managing cardiac arrest, and balanced against participant health parameters and weight limitations, we do recommend that AEDs be a part of the medical equipment of archaeological field programs. Ultimately, CPR interventions must balance rescuer safety with potential utility in light of a patient's condition and environmental threats.
MYTH 7: DISLOCATION REDUCTION MISUNDERSTANDINGS
It is probably a myth that all dislocations in a remote setting must be brought to a health-care facility for reduction. Studies suggest that shoulder reductions, for example, can be performed safely even by nonmedical personnel in remote settings (Bokor-Billmann et al. 2015; Ditty et al. 2010; Smith et al. 2018). It is also intuitively apparent that some minor dislocations, such as finger or patellar (kneecap) dislocations, reduce spontaneously or are immediately reduced by patients themselves or by their comrades. On the other hand, dislocations of sites such as the hip, the elbow, or the knee joint (not the kneecap) are unlikely to be managed successfully in the field.
Ideally, dislocation reductions should be attempted only after specific training. Numerous wilderness medicine schools and curricula (as described in Hawkins and Winstead 2021) include training in dislocation reduction as part of the “wilderness” component of their training. From a scope-of-practice perspective, however, many states confine dislocation reduction to only a few types of health-care professionals. This places dislocation reduction in a contested zone: a state-credentialed EMT might not be permitted to perform the skill under state rules, yet the same EMT going through a wilderness EMT class might be trained in the procedure (Hawkins 2018). A patient-centered perspective might argue that individuals with specific training in dislocation reduction should be prepared to implement those skills in environments where delayed access to formal medical care is likely, because reducing the time a joint remains dislocated reduces complications in multiple ways. From another, more legalistic perspective, the scope-of-practice issue speaks to the benefit of having medical oversight for programming if interventions beyond first aid are planned. Medical oversight and advice from a board-certified and state-licensed physician for all medical programming is a best practice (Millin et al. 2017; Warden et al. 2012).
MYTH 8: THE MYTH OF TOURNIQUET DANGER
Most wounds encountered in a wilderness environment will require very little hemorrhage (bleeding) control because they will consist primarily of scrapes and minor lacerations; we cover these further in Myth 10: Wound Management Myths. Although not as common as a scrape or small cut, a traumatic injury resulting in uncontrolled hemorrhage is an immediate medical emergency in the backcountry. The U.S. military identified the importance of hemorrhage control in combat operations and reconfigured its trauma assessment algorithm from the “ABC—Airway, Breathing, Circulation” approach to the “MARCH—Massive hemorrhage, Airway, Respirations, Circulation, Head trauma/hypo/hyperthermia” acronym, which emphasizes the primacy of preventing exsanguination (Drew et al. 2015; Hawkins, Simon, et al. 2017). Many wilderness medicine education companies and texts have incorporated the lessons learned by the military and now teach MARCH due to this focus (Hawkins, Simon, et al. 2017; Smith et al. 2018).
For severe bleeding or arterial bleeds associated with traumatic injuries such as open fractures or severe limb injuries, when bleeding cannot be controlled by direct pressure or hemostatic agents, the use of a tourniquet approved by the Committee on Tactical Combat Casualty Care (TCCC) should be considered. Death from exsanguination can occur within minutes, and a number of studies of tourniquet use in prehospital environments have shown their effectiveness. Two studies showed survival rates of 90% and 96% for extreme injuries when tourniquets were placed early, prior to the patient going into shock (Kragh et al. 2009, 2011).
The primary and enduring myth of hemorrhage control was that a tourniquet is the “weapon of last resort.” On this view, a stepwise approach was favored: manual direct pressure, the use of pressure points, elevation of the injured limb, and only then placement of a tourniquet. The reasoning was that once a tourniquet is placed, the likelihood of permanent damage, infection, or amputation of the limb distal to the tourniquet increases (Drew et al. 2015).
Tourniquets have long been used in controlled environments such as operating rooms for upward of two hours with no injury to the distal limb. Prior to the conflicts in Afghanistan and Iraq, knowledge of the effectiveness of commercial or improvised tourniquets in the field was lacking. This is no longer the case: application of a commercial tourniquet for life-threatening arterial bleeding is now considered the primary lifesaving intervention in austere environments. As with tourniquet application in the operating room, there is minimal risk of complication from placement for up to two hours in a field environment (Ostman et al. 2004; Quinn et al. 2014; Tourtier et al. 2013). Additionally, a 2015 review in the Journal of Trauma and Acute Care Surgery found that improvised tourniquets were as capable as commercial tourniquets in stopping arterial bleeding, although they tend to be more painful (Stewart et al. 2015). Improvised tourniquets, however, must by definition be assembled during an already stressful experience, and some mass casualty incidents suggest that they may be less effective in real-world scenarios. Consequently, we recommend that every first aid kit contain a commercial tourniquet along with instructions for its use. In summary, based on the new evidence, concern regarding amputation or other limb injury has lessened, and early use of tourniquets is now encouraged to save lives.
Recommended commercial tourniquets include the Combat Application Tourniquet (C-A-T) and the SOF Tactical Tourniquet (SOF-T). Wilderness Medicine Magazine published a helpful 2019 review of the recently expanded list of commercial tourniquets recommended by the TCCC (Bennett and Christensen 2019). Important considerations when placing an improvised tourniquet to control arterial bleeding include material with rounded edges, a minimum width of 1.0–1.5 inches, proximal placement within 2–3 inches of the wound (true also for commercial tourniquets), and use of a sturdy windlass device that can be secured. We recommend that archaeology programs choose one TCCC-approved commercial tourniquet from the Wilderness Medicine Magazine list, train on its use, and include it in their medical kit.
MYTH 9: DROWNING MYTHS
As of 2002, the common perception that drowning equates to death became a myth. That year, the Second World Congress on Drowning defined drowning as “the process of experiencing respiratory impairment from submersion/immersion in liquid” (Sempsrott et al. 2017). Patients can survive the process of drowning, and most do. This definition gives us a much better understanding of the disease process of drowning and of how to interrupt it along the spectrum from initial respiratory distress to final, irreversible death.
This definition has been accepted by nearly all consensus-setting organizations, which also argue that modifying terminology such as “near drowning,” “dry drowning,” “secondary drowning,” and other analogues should not be used. There are only three outcomes to the drowning process—death, survival with morbidity, and survival without morbidity. This means that the only routinely accepted modifiers of drowning are “fatal drowning” and “nonfatal drowning” (Sempsrott 2018). In particular, the concepts of “secondary drowning” and “dry drowning” have penetrated deeply into popular dialogue through nonscientific print and social media. Myths around these alleged conditions have generated a great deal of fear and misunderstanding, and combating them is crucial. Drowning is too common, too preventable, and too often lethal to allow mythical teachings to perpetuate, especially among scientific communities committed to evidence and rational dialogue (Hawkins, Sempsrott, and Schmidt 2017; Quan et al. 2018; Schmidt et al. 2018).
As an example of why this matters for archaeology and outdoor fieldwork programs, the myth of drowning equating to death has important implications for the documentation of drowning incidents in risk management. Appropriately documenting all individuals who experience respiratory impairment in a liquid medium as “drowning” patients enables better risk management as well as the quantification of, and intervention in, such incidents in the future.
From an interventional standpoint and as part of the medical “tool kit,” we recommend water safety training and the assurance that personnel who will be interfacing with water environments have adequate swimming ability. Starfish Aquatics Institute and Landmark Learning have developed an innovative Wilderness Lifeguard course that translates front-country lifeguarding practices into the wilderness environment.
Another perpetuated drowning myth regards escape from submerging vehicles. Some studies suggest that a surprisingly high percentage of all fatal drownings occur in submerging vehicles. This means that archaeology and outdoor fieldwork expeditions could be considered at particularly high risk for this drowning scenario, given that site work often involves driving to and from locations (Sempsrott et al. 2017). Many recommendations that appear in popular media about how to escape are incorrect, and they perpetuate myths that, paradoxically, increase the risk of death (Hawkins 2015a). These recommendations include (1) allowing the passenger compartment to fill with water and the vehicle to sink before opening the door to escape or (2) breathing trapped air in the passenger compartment (Hawkins 2015b). Such suggestions are not evidence based, and they contribute to submerged-vehicle drowning deaths. Research by Giesbrecht suggests that the best EBM algorithm for escaping from a submerging vehicle is unfastening seatbelts, opening windows, releasing children from seatbelts or car seats and bringing them to one of the windows, pushing the children out the window, and then doing the same with the adults (Hawkins 2015a). It is imperative to take all these steps as quickly as possible because the evidence suggests that most vehicles float for only 30–120 seconds before sinking (Sempsrott 2018).
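Giesbrecht's sequence is easiest to retain as an ordered checklist. The sketch below simply encodes the order of steps given above; the wording of each step is ours.

```python
# Ordered escape sequence for a submerging vehicle, per the steps above.
# Most vehicles float for only 30-120 seconds, so speed is everything.
ESCAPE_SEQUENCE = (
    "unfasten seatbelts",
    "open the windows",
    "release children from seatbelts or car seats and bring them to a window",
    "push the children out the window",
    "adults exit through the windows",
)

for number, step in enumerate(ESCAPE_SEQUENCE, start=1):
    print(f"{number}. {step}")
```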
MYTH 10: WOUND MANAGEMENT MYTHS
There are a number of myths related to the management of skin injuries in austere environments, from minor wounds, such as blisters and abrasions, to life-threatening damage that results in major bleeding (Myth 8). To address these myths, it is first important to focus on what should be done with wounds: stop the bleeding, irrigate, bandage to protect, and evacuate if needed (Hawkins, Simon, et al. 2017; Simon 2019). Below are the steps, in order of priority, for the care of any wound in the wilderness.
Proper Wound Management
Stop Bleeding
Most wounds will stop bleeding without any specific action. For larger wounds, common practice was a gradual escalation of efforts: direct manual pressure, elevation of the extremity, application of a pressure dressing, and, as a last resort, application of a tourniquet. As noted earlier in Myth 8, newer first aid algorithms such as MARCH argue for placement of a tourniquet as the first step in arterial bleeding control (Hawkins, Simon, et al. 2017). However, direct pressure at the site of the wound is considered the gold standard in the control of bleeding, and it should be used first in almost every instance. Only one study has examined the efficacy of pressure points, and it found them lacking in comparison to other approaches; in practice, pressure points are extremely difficult to locate and to apply for any period of time, so the utility of this technique is a myth, and it is no longer recommended (Drew et al. 2015; Quinn et al. 2014). No studies have measured the effectiveness of limb elevation in hemorrhage control, but because there is little risk and some value, it is recommended as long as it does not delay or complicate the delivery of direct manual pressure, application of a pressure dressing, or application of a tourniquet. Pressure dressings have been shown to provide effective hemostasis, and they are the logical next step once bleeding is controlled through manual pressure (Drew et al. 2015). Tourniquets are effective in stopping severe arterial hemorrhage, and they save lives. Wound packing can also be an effective intervention for deep wounds. The American College of Surgeons has instituted a “Stop the Bleed” campaign (www.stopthebleed.org), which offers short (approximately two-hour) courses for laypersons in these interventions—all of which would be excellent additions to the cognitive “tool kit” of an archaeology team. In terms of actual equipment, as noted earlier, every team should carry a commercial tourniquet.
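For training materials, the priorities described above can be summarized as a simple ordered list. The sketch below is our own condensation of that sequence, not a protocol from the cited sources.

```python
def bleeding_control_steps(arterial: bool) -> list[str]:
    """Stepwise priorities condensed from the discussion above. Pressure
    points are omitted: the text identifies their utility as a myth."""
    if arterial:
        # MARCH-style algorithms place the tourniquet first for arterial bleeds.
        return ["commercial tourniquet", "direct manual pressure", "pressure dressing"]
    return [
        "direct manual pressure",
        "elevation (only if it delays nothing else)",
        "pressure dressing",
        "wound packing (deep wounds)",
        "tourniquet if still uncontrolled",
    ]
```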
Irrigate
Cleaning the wound through adequate irrigation is often overlooked, but it is likely the most critical factor in wound healing and infection prevention (Hawkins, Simon, et al. 2017). Two elements are important in this step of wound management: timeliness and pressure. Cleaning and irrigating the wound should begin as soon as possible to limit contact time with potential contaminants. Applying pressurized irrigation using the cleanest fluid available (at least 1 L) will remove much more debris than pouring liquid in an undirected fashion or placing the wound in a nonmoving body of water (which is not recommended; Quinn et al. 2014). Drinking water is the ideal clean fluid. A plan, the necessary equipment (e.g., an irrigation syringe), and access to clean water are often overlooked considerations when building a scientific expedition medical kit.
Bandage to Protect
There are many techniques for bandaging a wound, and done correctly, they promote healing and protect the wound from further contamination. Using the cleanest material available, bandaging should be conducted as soon after irrigation as possible. Inclusion of an antibiotic ointment is also recommended, not so much to kill bacteria in the wound as to protect it from additional contamination and to prevent the bandage material from adhering to the wound. Due to the prevalence of reactions to the neomycin in Neosporin (sometimes called triple antibiotic), a simple bacitracin or a bacitracin-polymyxin combination such as Polysporin is recommended as a medical-kit antibiotic ointment (Hawkins, Simon, et al. 2017). There are a number of wound closure techniques, but unless an individual is sure that all contaminants have been removed and is trained in these techniques (application of skin glue, suturing), it is best to dress and protect the wound until a professional can assess and close it properly. In the event of a deep wound that requires prolonged field care prior to evacuation, a wet-to-dry dressing will help maintain the viability of the deeper tissue; these dressings should be changed multiple times daily. Studies indicate that wounds can be closed safely up to six hours after they occur without a significant increase in infection risk (Quinn et al. 2014). High-risk wounds, however, should be left open. These include any human or animal bites, puncture wounds, crush wounds involving a large amount of tissue, and wounds to the hands or feet.
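The closure guidance above reduces to a conservative decision rule. The following sketch encodes it for illustration only; the wound-type labels are our own simplifications, and when in doubt, wounds should be dressed and left open.

```python
HIGH_RISK_WOUNDS = {"human bite", "animal bite", "puncture",
                    "crush with large tissue involvement", "hand wound", "foot wound"}

def may_close_in_field(hours_since_injury: float, wound_type: str,
                       fully_cleaned: bool, trained_in_closure: bool) -> bool:
    """Close only within ~6 hours, never for high-risk wounds, and only if
    the wound is fully cleaned and the provider is trained (see above)."""
    return (hours_since_injury <= 6
            and wound_type not in HIGH_RISK_WOUNDS
            and fully_cleaned
            and trained_in_closure)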
Evacuate if Necessary
Evacuation should be considered for abrasions located on the soles of the feet, the palms of the hands, or the genitalia, or for any abrasion that begins to show signs of severe infection. People with lacerations that involve tendons, ligaments, or nerves, or that cause severe bleeding, should be evacuated immediately. Large lacerations and puncture wounds, especially deep ones, will likely require evacuation because they are difficult to clean and highly susceptible to infection. Additionally, any animal bites or wounds grossly contaminated with organic matter should be evaluated quickly by a medical professional. For wounds with a high risk of infection, a tetanus vaccination should be considered, and for any interaction with an animal (bite or scratch), rabies postexposure vaccination may be needed.
Myths about Blisters
Finally, the myth most likely to affect every individual pertains to care of the common blister. A blister, whether on the hand or the foot, is one of the most common medical problems in the outdoors. Although the internet is full of ways to treat a blister, the best treatment is prevention through properly fitted, clean, and broken-in gear. Other commonly promoted prevention “strategies” include wearing two pairs of socks, wearing pantyhose under socks, lathering the feet with petroleum jelly, and covering problem areas with duct tape, among a variety of other methods.
There are a number of treatment methods too—some nonsensical and others painful. The best treatment of a small blister (e.g., less than 1–2 cm) that retains its outer skin flap (roof) is to sterilize a needle and use it to create pinholes in the lowest portion of the blister, allowing drainage by gravity. Relieving the pressure while maintaining the integrity of the roof will reduce pain and promote healing. For a blister that is open (unroofed), trim away the rough edges to prevent further irritation, and dress it with paper tape, Spenco 2nd Skin, or Compeed (a similar product that is easier to find in the UK and Europe). In the past, a “donut” made of moleskin was recommended, but in many—if not most—cases, these are hard to fashion into an effective dressing, and they often result in inflammation or additional blisters along the periphery of the dressing (Hawkins, Simon, et al. 2017).
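For completeness, the blister-care branch described above can be summarized in a few lines. This is our own condensation for illustration; the final fallback case (a large blister with an intact roof) is our own conservative default rather than guidance from the cited source.

```python
def blister_care(diameter_cm: float, roof_intact: bool) -> str:
    """Treatment branch condensed from the discussion above; sizes approximate."""
    if roof_intact and diameter_cm <= 2:
        return ("sterilize a needle, create pinholes at the lowest point of the "
                "blister, and drain by gravity, keeping the roof intact")
    if not roof_intact:
        return ("trim the rough edges and dress with paper tape, "
                "Spenco 2nd Skin, or Compeed")
    # Large roofed blister: conservative default (ours, not from the source).
    return "protect, pad, and monitor"
```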
CONCLUSION
Myths and misunderstandings are abundant in wilderness medicine. Field archaeologists and others in outdoor field programs should look to evidence-based medicine as the most effective way to assess teachings for legitimacy. That filter can help programs weigh the many training opportunities and publications available within the growing fields of WEMS and wilderness medicine. Integration of evidence-based wilderness medicine practices will be an increasingly important part of risk management for fieldwork operations. Key tools to consider including in expedition medical kits are tourniquets, epinephrine, prescription nausea/vomiting medication, and wound cleaning and dressing materials. In addition, cognitive tools are an essential part of an evidence-based tool kit. These include training in the specific equipment described above as well as specialized training to appropriately manage specific conditions such as drowning, bites, and spinal injuries.
Acknowledgments
No permits were needed for this research.
Data Availability Statement
No specific archaeological data were generated or analyzed in the course of this research and manuscript preparation.