
Sex Differences in Emotion Recognition and Emotional Inferencing Following Severe Traumatic Brain Injury

Published online by Cambridge University Press:  21 October 2016

Barbra Zupan*
Affiliation:
Department of Applied Linguistics, Brock University, St. Catharines, Ontario, Canada
Duncan Babbage
Affiliation:
Centre for Person Centred Research, Centre for eHealth, Auckland University of Technology, Auckland, New Zealand
Dawn Neumann
Affiliation:
Department of Physical Medicine and Rehabilitation, Indiana University School of Medicine, Rehabilitation Hospital of Indiana, United States
Barry Willer
Affiliation:
Department of Psychiatry, Jacobs School of Medicine and Biomedical Sciences, State University of New York at Buffalo, Buffalo, NY, United States
*
Address for correspondence: Barbra Zupan, Department of Applied Linguistics, Brock University, St. Catharines, Ontario, Canada. ORCID: 0000-0002-4603-333X. E-mail: [email protected].

Abstract

The primary objective of the current study was to determine if men and women with traumatic brain injury (TBI) differ in their emotion recognition and emotional inferencing abilities. In addition to overall accuracy, we explored whether differences were contingent upon the target emotion for each task, or upon high- and low-intensity facial and vocal emotion expressions. A total of 160 participants (116 men) with severe TBI completed three tasks: facial emotion recognition (DANVA-Faces), vocal emotion recognition (DANVA-Voices) and emotional inferencing (the emotional inference from stories test; EIST). Results showed that women with TBI were significantly more accurate than men in their recognition of vocal emotion expressions and in emotional inferencing. Further analyses of task performance showed that women were significantly better than men at recognising fearful facial expressions and facial emotion expressions high in intensity. Women also displayed increased response accuracy for sad vocal expressions and low-intensity vocal emotion expressions. Analysis of the EIST task showed that women were more accurate than men at emotional inferencing in sad and fearful stories. A similar proportion of women and men with TBI were impaired (≥ 2 SDs below normative means) at facial emotion perception, χ² = 1.45, p = 0.228, but a larger proportion of men was impaired at vocal emotion recognition, χ² = 7.13, p = 0.008, and emotional inferencing, χ² = 7.51, p = 0.006.

Type
Articles
Copyright
Copyright © Australasian Society for the Study of Brain Impairment 2016 

Introduction

Our overall wellbeing is closely tied to our ability to determine how someone else is feeling (Milders, Fuchs, & Crawford, 2003; Nowicki & Duke, 1994; Nowicki & Mitchell, 1998; Phillips, 2003). To do this, we need to accurately interpret facial and vocal emotion expressions and use contextual cues within the situation to make inferences about the thoughts, feelings and intentions of others (Barrett, Mesquita, & Gendron, 2011; Martin & McDonald, 2003; Rosip & Hall, 2004; Spell & Frank, 2000; Turkstra, 2008). People with traumatic brain injury (TBI) have been shown to have difficulty identifying emotions using facial, vocal and contextual cues (Babbage et al., 2011; Bibby & McDonald, 2005; Bornhofen & McDonald, 2008; Ferstl, Rinck, & von Cramon, 2005; McDonald & Flanagan, 2004; Neumann et al., 2012; Neumann, Zupan, Malec, & Hammond, 2013; Radice-Neumann, Zupan, Babbage, & Willer, 2007; Zupan & Neumann, 2014; Zupan, Babbage, Neumann, & Willer, 2014; Zupan, Neumann, Babbage, & Willer, 2009). To date, studies have not focused on whether these challenges are different for men and women. This is likely because incidence rates of TBI are at least twice as high for men (Colantonio et al., 2010; Faul, Xu, & Wald, 2010). However, women still comprise one-third of people with TBI (Colantonio et al., 2010; Faul et al., 2010; Nalder et al., 2016); so, it is necessary to determine if sex-based differences exist.

Some studies show sex differences for emotion recognition and emotional inferencing skills in people without TBI. For instance, women have been found to have better facial emotion recognition skills than men (Collignon et al., 2010; Hall & Matsumoto, 2004; Hampson, van Anders, & Mullin, 2006; Ietswaart, Milders, Crawford, Currie, & Scott, 2008; Montagne, Kessels, Frigerio, de Haan, & Perrett, 2005), particularly recognition of negative (Kessels, Montagne, Hendriks, Perrett, & de Haan, 2014; Li, Yuan, & Lin, 2008; Thayer & Johnson, 2000) or subtle (Hoffmann, Kessler, Eppel, Rukavina, & Traue, 2010; Li et al., 2008) emotion expressions. Research in vocal affect recognition reports a female advantage for the identification of the negative emotions, fear and sadness, as well as for vocal expressions of happiness (Bonebright, Thompson, & Leger, 1996; Collignon et al., 2010; Ietswaart et al., 2008; Schirmer, Striano, & Friederici, 2005). Women without TBI have also been found to be better at making inferences about the goals, intentions and desires of others (Krach et al., 2009). Moreover, brain imaging studies indicate that emotional stimuli activate different neuronal structures in men and women (Campanella et al., 2004; Killgore, Oki, & Yurgelun-Todd, 2001; Krach et al., 2009; Lee et al., 2002; Li et al., 2008; Schirmer & Kotz, 2003; Wildgruber, Pihan, Ackermann, Erb, & Grodd, 2002).

Given the results of research with men and women without TBI, it is reasonable to expect that sex-based differences in emotion recognition and emotional inferencing skills also exist for men and women with TBI. Thus, the primary objective of this study was to compare the performance of men and women with TBI for facial and vocal emotion recognition and for the use of contextual cues to infer how someone else is feeling (i.e., emotional inferencing). We hypothesised that women would more accurately recognise both facial and vocal emotion expressions than men and would also be better at making emotional inferences using contextual cues. We had two secondary objectives. The first was to determine whether the response accuracy of men and women for facial, vocal and contextual cues was contingent upon the specific emotion category. We hypothesised that women would be more accurate than men across all tasks for negatively valenced emotions, particularly fearful and sad expressions (Bonebright et al., 1996). The second was to compare recognition accuracy for high- and low-intensity facial and vocal emotion expressions. Based on research with people without TBI (Hoffmann et al., 2010), we hypothesised that women with TBI would be more accurate than men in their recognition of low-intensity facial and vocal emotion expressions but that there would be no difference between groups for high-intensity expressions.

Methods

Participants

Participants were recruited from outpatient brain-injury rehabilitation centres and local brain-injury support groups in . . . and screened as part of a larger study addressing treatment effects (Neumann, Babbage, Zupan, & Willer, 2015). The current study was approved by research ethics committees at each of the participating institutions, and informed consent was provided by participants prior to participation.

Participants ranged in age from 21 to 65 years (mean = 41.15; SD = 12.18). All participants had sustained a severe TBI after the age of 18. Severity was determined by one of the following criteria: Glasgow Coma Scale score at injury ≤ 8, post-traumatic amnesia ≥ 7 days or loss of consciousness ≥ 24 hours. All participants were at least 1-year post injury and demonstrated sufficient understanding of oral and written English during screening. Participants did not have any pre-morbid developmental (e.g., autism spectrum disorder) or acquired neurological disorder (e.g., stroke), major psychiatric disorder, substance dependence or uncorrected impairment of vision or hearing. Ultimately, a total of 160 participants with severe TBI were included in the current study, 44 (27%) of whom were women. Table 1 provides further demographic information by sex. Men and women did not differ in age, level of education, age at injury or time since injury.

TABLE 1 Demographic Variables by Sex for Male and Female Participants

Measures and Procedures

This study includes a subset of measures administered to participants as part of the larger randomised clinical trial (Neumann et al., 2015). As part of that study, participants were administered a wide battery of tests to evaluate emotion recognition, emotional inferencing, empathy, cognition, mood, community integration, relationship support and olfactory sensitivity. Only the two measures (three total tasks) relevant to the current study are discussed here, all of which were administered in person via computer. Participants were seen individually and offered a break as needed.

Diagnostic analysis of nonverbal accuracy 2 (DANVA-2) – adult faces (DANVA-Faces) and adult paralanguage (DANVA-Voices) subtests (Nowicki, 2008)

The DANVA-2 is a standardised assessment with age-related norms collected from a healthy population of children and adults between the ages of 3 and 99. Information regarding additional demographics (e.g., sex, level of education and ethnicity) of the normative sample is not available for the DANVA-2. However, both the Adult-Faces and Adult-Voices subtests have normative scores (means and standard deviations) available by specific age-groups, have been shown to have good internal consistency and high test-retest reliability, and correlate well with measures assessing similar constructs, such as social competence (Nowicki, 2008; Nowicki & Carton, 1993; Nowicki & Duke, 1994). Both subtests have also been used previously with people with TBI (Neumann et al., 2012, 2013, 2015; Spell & Frank, 2000; Zupan & Neumann, 2014). Each subtest includes 24 stimuli that equally represent four emotion categories (happy, sad, angry and fearful). Both also include an equal number of high- and low-intensity expressions.

The DANVA-Faces subtest consists of coloured photographs depicting young adults portraying emotional facial expressions. Participants viewed each of the 24 photographs via computer screen for 15 seconds, in contrast to the 2 seconds specified in the standard procedure, and selected the emotion portrayed from a set of four choices (happy, sad, angry and fearful). The extended presentation time was used to ensure that responses were not affected by speed of processing difficulties.

The DANVA-Voices subtest includes 24 repetitions of the sentence, ‘I'm going out of the room now, and I'll be back later’, spoken in either a happy, sad, angry or fearful tone of voice. Participants heard each sentence only once and selected the emotion expressed from the same set of four choices used for the DANVA-Faces.

Emotional inference from stories test (EIST) (Zupan, Neumann, Babbage, & Willer, 2015)

The emotional inference from stories test (EIST) measures participants’ ability to infer emotions using written contextual information. The EIST has been validated with healthy adults (age 17–44) and found to be sensitive to deficits in emotional inferencing in people with TBI (Neumann et al., 2015; Zupan et al., 2015).

Participants received either Version 1 or Version 2 of the EIST (see Zupan et al. (2015) for a description of each version). Each version contained 12 stories that were presented via computer and accompanied by audio readings of the text in a neutral tone of voice. Following each story, participants were asked how a character in the story was feeling, selecting from a list – happy, sad, angry or fearful. To accurately interpret the character's emotions, participants needed to integrate contextual cues (e.g., situation and event) with the character's wants, beliefs and expectations, including the character's response to that situation or event. Participants were not able to refer back to the story to respond to the question. Test scores ranged from 0 to 12.
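
To make the scoring concrete, the sketch below shows how trial-level responses from these three tasks could be reduced to the accuracy measures analysed in the following sections. It is an illustration only: the column names and values are hypothetical rather than drawn from the study's data files, and pandas is an assumed tool.

```python
import pandas as pd

# Hypothetical trial-level responses: one row per stimulus presentation.
# Column names (participant, task, emotion, intensity, correct) are illustrative only.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "task":        ["DANVA-Faces", "DANVA-Faces", "DANVA-Voices", "EIST"] * 2,
    "emotion":     ["happy", "fearful", "sad", "angry"] * 2,
    "intensity":   ["high", "low", "low", None] * 2,   # EIST items carry no intensity label
    "correct":     [1, 0, 1, 1, 1, 1, 0, 0],
})

# Overall per cent correct for each participant on each task.
task_acc = (trials.groupby(["participant", "task"])["correct"]
                  .mean().mul(100).rename("pct_correct"))

# Accuracy broken down by emotion category, and by intensity (DANVA subtests only).
emotion_acc = (trials.groupby(["participant", "task", "emotion"])["correct"]
                     .mean().mul(100))
intensity_acc = (trials.dropna(subset=["intensity"])
                       .groupby(["participant", "task", "intensity"])["correct"]
                       .mean().mul(100))

print(task_acc)
```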

Statistical Analyses

To address the primary and secondary objectives, we conducted a repeated measures analysis of variance for each main question. The dependent variables were response accuracy (mean per cent correctly identified) for the task as a whole (DANVA-Faces, DANVA-Voices and EIST), response accuracy for the target emotion categories (happy, sad, angry and fearful) within each task, and response accuracy for intensity (high and low) for the DANVA-Faces and DANVA-Voices tasks. Since normative data for the DANVA-Faces and DANVA-Voices tasks differ by age-group, z-scores were used when the overall test score was included in analyses. Raw scores for the DANVA-Faces and DANVA-Voices were used for the analyses addressing the secondary objectives, because the DANVA provides normative data only for the total score, not by emotion or intensity. Raw scores were also used for secondary analyses of performance on the EIST. Since our primary objective was to explore sex differences in how participants responded to the three main tasks, all results associated with this aim were reported as significant at p < 0.05. Where Mauchly's test indicated that the assumption of sphericity had not been met, degrees of freedom were corrected using Greenhouse–Geisser estimates of sphericity. Statistical analyses were conducted using SPSS v.23.0.0.0.
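
As an illustration of this analysis pipeline (the study itself used SPSS), the sketch below converts a raw subtest score to a z-score against an age-group norm and runs a 2 (sex) × 3 (task) mixed-design ANOVA with an automatic Greenhouse–Geisser correction using the open-source pingouin package. The norm values, column names and simulated scores are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed open-source stand-in for the SPSS procedures described above

# Hypothetical age-group norms (mean, SD) for one DANVA subtest; the real values
# come from the DANVA-2 manual and are not reproduced here.
norms = {"21-30": (19.5, 2.1), "31-40": (19.0, 2.3)}

def to_z(raw_score, age_group):
    """Convert a raw subtest score to a z-score against its age-group norm."""
    mean, sd = norms[age_group]
    return (raw_score - mean) / sd

# Simulated long-format data: one row per participant per task (column names illustrative).
rng = np.random.default_rng(0)
n = 40
tasks = ["Faces", "Voices", "EIST"]
df = pd.DataFrame({
    "id":    np.repeat(np.arange(n), len(tasks)),
    "sex":   np.repeat(np.where(np.arange(n) < 12, "F", "M"), len(tasks)),
    "task":  np.tile(tasks, n),
    "score": rng.normal(0.7, 0.1, size=n * len(tasks)),
})

# 2 (sex, between-subjects) x 3 (task, within-subjects) mixed-design ANOVA;
# with correction="auto", pingouin applies a Greenhouse-Geisser correction to the
# within-subjects degrees of freedom when sphericity is violated.
aov = pg.mixed_anova(data=df, dv="score", within="task",
                     between="sex", subject="id", correction="auto")
print(aov.round(3))
print(to_z(14, "21-30"))  # e.g., z-score for a raw score of 14 in the 21-30 norm group
```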

Results

Primary Objective

Are there sex differences in emotion recognition and emotional inferencing following TBI?

A 2 (sex) × 3 (task) mixed design ANOVA examining sex differences across the three tasks found a significant main effect of both sex, F(1, 157) = 6.04, p = 0.015, ηp² = 0.37, and task, F(2, 314) = 105.78, p < 0.001, ηp² = 0.40. Accuracy for each task by men and women is shown in Figure 1; the absolute level of accuracy varied by task. No significant interaction was observed between sex and task, F(2, 314) = 1.61, p = 0.202, ηp² = 0.10.

FIGURE 1 Mean response accuracy for men and women by emotion recognition task.

Further analyses were conducted comparing scores of participants in the current study to age-adjusted normative scores. Impairment was indicated if participant scores for the subtest were 2 or more standard deviations below the normative age-group mean. Of the 116 men who participated in the study, 49 (42%) were impaired at facial affect recognition, as were 15 of the 44 women (32%). This difference was not significant, χ² = 1.45, p = 0.228. Using this same method of impairment classification, significantly more men were impaired at recognising emotion in voices (n = 48, 41%) than women (n = 8, 19%), χ² = 7.13, p = 0.008. A comparison of scores on the EIST to a normative sample (Zupan et al., 2015), with participants again classified as impaired if scores were 2 or more standard deviations below the normative sample mean, also indicated that significantly more men were impaired (n = 106, 91%) than women (n = 33, 75%), χ² = 7.51, p = 0.006.
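
The impairment classification and sex comparison just described can be expressed compactly. The sketch below is illustrative only: the normative mean and SD, the participant scores and the column names are hypothetical, and scipy's chi-square test of independence on a 2 × 2 (sex × impaired) table is assumed as one standard way to obtain statistics of the kind reported above.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical normative mean and SD for one subtest (illustrative values only).
norm_mean, norm_sd = 19.0, 2.3
cutoff = norm_mean - 2 * norm_sd  # impaired if 2 or more SDs below the normative mean

# Hypothetical participant scores.
df = pd.DataFrame({
    "sex":   ["M", "M", "M", "F", "F", "F"],
    "score": [12.0, 20.0, 13.0, 21.0, 13.5, 19.0],
})
df["impaired"] = df["score"] <= cutoff

# 2 x 2 contingency table (sex x impaired) and chi-square test of independence.
table = pd.crosstab(df["sex"], df["impaired"])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```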

Secondary Objectives

Is response accuracy of men and women for facial, vocal and contextual cues contingent upon the specific emotion category?

The first of our secondary objectives was to explore the impact of emotion category on response accuracy by men and women to each of the three tasks. To explore performance on the DANVA-Faces, a 2 (sex) × 4 (emotion category) mixed design ANOVA was conducted. A significant main effect was observed for emotion category, F(2.69, 422.19) = 80.80, p < 0.001, ηp² = 0.34. No statistically significant interaction was found between sex and emotion category on this measure, F(2.69, 422.19) = 1.38, p = 0.249, ηp² = 0.009. Mean response accuracy by emotion category for this measure is displayed in the first graph in Figure 2. Although a visual inspection of those means and confidence intervals suggested a possible sex effect, no statistically significant difference was found, F(1, 157) = 3.28, p = 0.07, ηp² = 0.02. To further explore the main effect of emotion, follow-up one-way ANOVAs were conducted to look at how men and women responded to each of the four categories of emotion. Although performance for happy, sad and angry faces was similar, women were significantly better at recognising fearful facial expressions than men, F(1, 158) = 5.512, p = 0.02.
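
For follow-up comparisons of this kind, a one-way ANOVA with sex as the single factor can be run separately for each emotion category; with two groups this is equivalent to an independent-samples t-test (F = t²). The sketch below uses scipy with hypothetical accuracy values, not the study's data.

```python
from scipy.stats import f_oneway

# Hypothetical per-participant accuracy (% correct) for fearful facial expressions,
# split by sex; the values are illustrative only.
fearful_men = [50, 62, 38, 75, 50, 62, 44, 56]
fearful_women = [62, 75, 50, 88, 62, 75, 69, 81]

f_stat, p_value = f_oneway(fearful_men, fearful_women)
dfe = len(fearful_men) + len(fearful_women) - 2
print(f"F(1, {dfe}) = {f_stat:.2f}, p = {p_value:.3f}")
```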

FIGURE 2 Mean response accuracy for males and females by emotion category and task.

Figure 2 also displays responses by males and females to each emotion category within the DANVA-Voices task. Using a 2 (sex) × 4 (emotion) mixed design ANOVA, women were found to perform on average significantly better than men, F(1, 157) = 5.41, p = 0.021, ηp² = 0.03. Response accuracy on this task was also found to significantly depend upon emotion category, F(2.86, 449.49) = 13.87, p < 0.001, ηp² = 0.08. As previously, no statistically significant interaction was found between sex and emotion category on this measure, F(2.87, 449.49) = 1.88, p = 0.13, ηp² = 0.01. Follow-up one-way ANOVAs showed that women recognised sad vocal expressions significantly better than men, F(1, 157) = 11.26, p = 0.001.

Finally, a 2 (sex) × 4 (emotion category) mixed design ANOVA was also conducted to investigate the impact of emotion category on the ability of men and women to use contextual cues to infer others’ emotions on the EIST. Similar to the DANVA-Faces and DANVA-Voices tasks, results showed significant main effects for both sex, F(1, 158) = 8.11, p = 0.005, ηp² = 0.05, and emotion category, F(2.70, 432.91) = 4.253, p = 0.006, ηp² = 0.03, whereas no statistically significant interaction was found between sex and emotion category on this measure, F(2.70, 432.91) = 0.373, p = 0.753, ηp² = 0.002. Although women scored higher for all four emotions targeted in the EIST (see Figure 2), this difference was only significant for stories in which the characters were feeling sad, F(1, 158) = 5.48, p = 0.02, or fearful, F(1, 158) = 3.64, p = 0.05.

Did men and women respond differently to high- and low-intensity facial and vocal emotion expressions?

Only the DANVA-Faces and DANVA-Voices tasks have stimuli identified as high and low in intensity. For both tasks, a 2 (sex) × 2 (intensity) mixed design ANOVA was conducted to evaluate the influence of stimulus intensity on responses by males and females. For the DANVA-Faces task, we observed a significant main effect for intensity, F(1, 157) = 206.12, p < 0.001, ηp² = 0.57: participants were less accurate in identifying low-intensity facial emotion expressions (see the left half of Figure 3). A significant main effect of sex was not observed, F(1, 157) = 3.28, p = 0.072, ηp² = 0.02. To further explore the main effect of intensity, follow-up one-way ANOVAs were conducted to look at how men and women responded to high- and low-intensity expressions. We observed a significant sex difference for high-intensity expressions, F(1, 157) = 4.92, p = 0.028, with female participants more accurate in identifying high-intensity facial emotion expressions.

FIGURE 3 Mean response accuracy by men and women for high- and low-intensity stimuli.

The 2 (sex) × 2 (intensity) mixed design ANOVA for the DANVA-Voices task showed a significant main effect for both intensity, F(1, 157) = 17.84, p < 0.001, ηp² = 0.10, and sex, F(1, 157) = 5.414, p = 0.021, ηp² = 0.03. No significant interaction effect was found, F(1, 157) = 2.573, p = 0.111, ηp² = 0.02. Although women were better able to identify both high- and low-intensity expressions than men (see right half of Figure 3), this difference was only significant for low-intensity vocal emotion expressions, F(1, 157) = 6.99, p = 0.009.

Discussion

The primary objective of the current study was to compare how accurately men and women with TBI recognise facial and vocal expressions of emotion, and how well they use contextual cues to infer how another person is feeling. We hypothesised that women would score significantly higher than men for all three tasks. Our hypothesis was partially supported. Women with TBI in the current study scored significantly higher than men on the DANVA-Voices and EIST tasks. It appears that the female advantage often found in healthy controls is at least partially maintained following a TBI (Collignon et al., 2010; Hall & Matsumoto, 2004; Kessels et al., 2014; Krach et al., 2009; Montagne et al., 2005; Rosip & Hall, 2004; Schirmer, Kotz, & Friederici, 2002; Thayer & Johnson, 2000; Thompson & Voyer, 2014).

Studies with healthy adults have reported that women are better able to identify emotions in others when only facial cues are available (Collignon et al., 2010; Hall & Matsumoto, 2004; Montagne et al., 2005). Recent work also showed superior performance for facial affect recognition by women with TBI when compared to men with TBI (Rigon, Turkstra, Mutlu, & Duff, 2016). Our findings did not support this. It is possible that our findings differ from Rigon et al. (2016) because of our use of static stimuli and their use of dynamic stimuli. Interpreting emotion in everyday interactions requires us to decode moving and continually changing facial features, so our visual systems are primed for this type of stimulus (Cunningham & Wallraven, 2009). Furthermore, static and dynamic facial expressions of emotion are processed in different areas of the brain, with greater brain responses to dynamic stimuli (Adolphs, Tranel, & Damasio, 2003; LaBar, Crupain, Voyvodic, & McCarthy, 2003; Collignon et al., 2008; Schulz & Pilz, 2009). Alternatively, the lack of a significant sex difference on the DANVA-Faces task may have been due to the amount of time participants were given to view each stimulus. Previous research has indicated that superior facial emotion recognition for women only occurs under limited exposure times, particularly when exposure is less than 10 seconds (Hampson et al., 2006). Participants in the current study were given 15 seconds per facial stimulus compared to the standard 2 seconds given in the DANVA-Faces protocol. Healthy women have been reported to have faster speed of processing than men (Camarata & Woodcock, 2006), and in a subset of the current sample, we previously concluded that information processing speed was one of a number of factors related to facial affect recognition performance (Yim, Babbage, Zupan, Neumann, & Willer, 2013). Thus, although we felt it important to increase exposure time to minimise the impact of potential speed of processing difficulties on performance, it is possible that in doing so, we eliminated expected sex differences in performance.

Women with TBI in our sample were better than men at identifying emotion using only vocal cues, a finding similar to the sex differences reported for healthy adults (Bonebright et al., 1996; Collignon et al., 2010; Schirmer et al., 2005). In addition, a significantly smaller proportion of women (19%) than men (41%) were found to have a vocal emotion recognition impairment. Thus, it appears that in women, vocal affect recognition impairment is less common than facial affect recognition impairment (19% and 32%, respectively). This was not expected, given that recognising emotion using only vocal cues is typically more challenging than using only facial cues (Johnstone & Scherer, 2000; Russell, Bachorowski, & Fernandez-Dols, 2003; Walbott & Scherer, 1986). In contrast, a similar proportion of men were found to be impaired for voices (41%) and faces (42%). Taken together, these results raise the possibility that the recognition of vocal emotion expressions may be less vulnerable to injury in women following TBI. Research with healthy men and women has shown that during speech perception, women are influenced by the vocal affect within the message much earlier than men (Schirmer & Kotz, 2003; Schirmer et al., 2002). If women do in fact access the emotional information within a message earlier than men, the additional time may give them the opportunity to make more accurate judgments about the emotional tone of voice, particularly if speed of processing is compromised, as it often is after TBI.

Similar to previous work indicating that people with TBI have difficulty making inferences, emotional or otherwise (Bibby & McDonald, 2005; Milders, Ietswaart, Crawford, & Currie, 2006), a large proportion of both men (91%) and women (75%) in our sample were found to be impaired in their ability to use contextual cues to make inferences about how someone is feeling. However, women with TBI were more accurate than men in this ability, supporting work with healthy women and men in tasks that involve identifying the thoughts and emotions of others (Baron-Cohen, Richler, Bisarya, Gurunathan, & Wheelwright, 2003). Our findings also support Turkstra's (2008) study examining social inferencing abilities of adult men and women with TBI in a theory of mind task. It is important to point out that the EIST is not a theory of mind task per se but instead a language-based task that requires people to integrate available contextual and situational cues to make an inference about how someone is feeling. Since women with TBI have been shown to outperform men on language and working memory tasks (Ratcliff et al., 2007), it is possible that our results reflect differences in language and working memory skills rather than superior emotional inferencing abilities for women with TBI. We did not conduct language or working memory assessments to rule out this possibility.

One of the secondary objectives of the current study was to explore if men and women responded differently to stimuli representing different emotion categories. Although women scored higher than men for all three tasks, across all four emotions, this difference was not always statistically significant. Overall scores for men and women with TBI did not differ on the DANVA-Faces task. Examination of mean responses (see Figure 2) shows nearly identical responding to happy facial expressions by these two groups. This observation is not surprising given that happy was the only positive emotion included in the current study and also the emotion most easily recognisable in the face (Adolphs, 2002; Keltner & Ekman, 2000). Fearful was the most challenging emotion for both groups to identify. This pattern of recognition matches that reported by Rosenberg et al. (2014) for both people with and without TBI.

Although we found fearful to be the most difficult emotion to identify in the face overall, women were better able to recognise this emotion than men. This superior performance supports research with healthy adults reporting that women recognise negative facial emotion expressions better than men (Hampson et al., 2006; Kessels et al., 2014; Li et al., 2008; Thayer & Johnson, 2000).

For the DANVA-Voices task, we had hypothesised that women would show superior performance compared to men in their ability to identify vocal emotion expressions, particularly negatively valenced expressions. We observed the expected pattern – women with TBI were more accurate in their recognition of vocal emotion expressions overall, and significantly fewer of them were impaired compared to men. Response accuracy was also found to be dependent upon the emotion category in this task, suggesting either that some emotions are easier to recognise than others, or that the item difficulty across emotion categories is not equivalent on this measure.

The ability of men and women to use contextual cues to make emotional inferences has not been well studied, even in healthy populations. However, since women are generally thought to have stronger social–emotional skills than men and better recognition of negatively valenced emotions, we predicted that they would show superior performance for sad, angry and fearful stories. A large proportion of both women and men were found to be impaired on this task. Alongside this, our hypothesis regarding a sex difference was again largely supported, although the sex difference emerged as a general main effect rather than an effect specific to negative emotions. That is, the sex difference was not restricted to our three negatively valenced emotions – women also performed better than men at using contextual cues to recognise happy stimuli.

The final objective of the current study was to explore sex differences in response to high- and low-intensity stimuli on the DANVA-Faces and DANVA-Voices tasks. Low-intensity stimuli present subtler cues, so it would be expected that correct identification would be more difficult, as observed. Studies with healthy men and women have shown that though women are better able to identify low-intensity facial emotion expressions than men, the advantage disappears for high-intensity expressions (Hoffmann et al., 2010; Montagne et al., 2005). Our results showed the opposite effect – women with TBI were significantly more accurate than men in identifying high-intensity facial emotion expressions. This result supports work by Rosenberg et al. (2014), who found a facilitative effect for the recognition of facial expressions as the intensity of the stimulus increased. However, our results conflict with those of Spell and Frank (2000), who found that compared to healthy controls, people with TBI have significantly more difficulty identifying high-, but not low-, intensity facial emotion expressions. It may be that only men differ from controls in the recognition of high-intensity expressions. Since Rosenberg et al. (2014) suggested that intensity cues may influence the recognition of facial emotion expressions more strongly than valence and/or specific emotion categories, this needs to be further explored in future studies that include men and women with and without TBI and a large number of high- and low-intensity stimuli.

Responses to the DANVA-Voices suggested that women were more effective than men at using information from low-intensity stimuli when interpreting paralinguistic cues of emotion. Women's ability to make use of low-intensity information is reflected in their more accurate recognition of sad vocal expressions – expressions that are characteristically low in intensity (Zupan et al., 2009). This finding further supports Rosenberg et al.'s (2014) suggestion that intensity may be the primary cue influencing emotion perception.

Study Limitations

This study has several limitations that should be noted. First, we increased the presentation time for the stimuli in the DANVA-Faces task from the standardised 2 seconds to 15 seconds. We did this to decrease the impact of potential speed of processing difficulties on performance, difficulties that are common following TBI. In doing so, we intentionally deviated from the procedure used in the normative study reported in the DANVA manual, making this comparison less direct. We may also have inadvertently altered the findings, given previous research showing that response time can mediate observed sex differences. Additionally, although the DANVA-Faces has been shown to have good reliability and validity and has been used previously with people with TBI, the stimuli are not representative of the dynamic facial expressions people encounter in everyday life, limiting generalisation of results. Future studies investigating sex differences in facial emotion recognition following TBI should include dynamic facial expressions, which would provide a more natural context for determining the length of time an emotion is presented.

Although the purpose of the current study was not to compare the performance of men and women with TBI to healthy men and women, the lack of a matched control group might still be viewed as a limitation. Although the DANVA-Faces and DANVA-Voices both include age-adjusted norms, none of the three tasks provides separate norms for healthy men and women, nor do they include additional demographic information (e.g., level of education). Future studies exploring emotion recognition and emotional inferencing abilities of men and women with TBI, and the potential factors that influence these skills (e.g., emotion category and intensity), should include a healthy control group closely matched for age, sex and education.

All measures administered in the current study used a four-choice response format, with one positive alternative (happy) and three negative ones (sad, angry and fearful). This may have artificially inflated response accuracy if participants used exclusion rules during response selection. Future studies should consider using a modified forced-choice format that additionally gives participants the option of responding ‘I don't know’, a format shown to reduce artificially forced agreement (Frank & Stennett, 2001).

The current study was also limited in its use of only four emotion categories. Although these four emotions have been widely examined in similar studies, future studies should use stimuli that include a greater number of overall emotions and a more equal number of positive and negative emotions. This would allow for better analysis of how men and women respond to specific emotion categories, and whether women do in fact have superior recognition of negatively valenced emotions, or if they are just better at differentiating between similar cues.

In addition to including a greater number of emotion categories, future studies of this type should also include a larger number of high- and low-intensity exemplars within each emotion category. Our results suggest that cues of intensity may have been a key contributor to the sex differences that were observed. However, the limited number of high- and low-intensity expressions within each emotion category on the DANVA did not allow for direct analysis of this hypothesis.

Finally, the current study did not include a language or working memory measure. Inclusion of these measures would have allowed us to determine whether sex differences found on the EIST, a language-based task, were related to differences in language or working memory capabilities, and to more confidently conclude that results of the EIST task reflect sex differences in emotional inferencing abilities.

Conclusions

Studies investigating sex differences following TBI are limited. Results of the current study indicated that even after a TBI, women have an advantage over men in the recognition of vocal emotion expressions and in emotional inferencing skills. However, despite scoring higher on average than men, a substantial proportion of women were still found to be impaired when compared to the normative means for each task. In other words, although women with TBI may do better than men overall, the proportion of women found to be impaired for facial affect recognition (32%), vocal affect recognition (19%) and emotional inferencing (75%) was not marginal, and women would benefit as much as men from remediation of these difficulties. Thus, these skills should be routinely evaluated in both sexes following TBI, and treated clinically, as needed, as part of the patient's rehabilitation programme. Similar to Rigon et al. (2016), our findings also highlight the importance of including sex as a factor in studies evaluating emotion recognition and emotional inferencing. It is acknowledged that the stimuli used in this study do not offer an ideal picture of how well participants with TBI recognise emotions in everyday life; thus, future studies should endeavour to evaluate emotion recognition using more ecologically valid methods (e.g., dynamic visual stimuli and combined nonverbal cues). Such methods would offer more insight into sex differences in emotion recognition and emotional inferencing following TBI.

Financial Support

This work was supported by the National Institute on Disability and Rehabilitation Research (grant no. H133G080043).

Conflict of Interest

None.

Ethical Standards

The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008.

References

Adolphs, R. (2002). Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behavioral and Cognitive Neuroscience Reviews, 1, 21–62.
Adolphs, R., Tranel, D., & Damasio, A.R. (2003). Dissociable neural systems for recognizing emotions. Brain and Cognition, 52 (1), 61–69. Retrieved from http://doi.org/10.1016/S0278-2626(03)00009-5
Babbage, D.R., Yim, J., Zupan, B., Neumann, D., Tomita, M.R., & Willer, B. (2011). Meta-analysis of facial affect recognition difficulties after traumatic brain injury. Neuropsychology, 25 (3), 277–285. Retrieved from http://doi.org/10.1037/a0021908
Baron-Cohen, S., Richler, J., Bisarya, D., Gurunathan, N., & Wheelwright, S. (2003). The systemizing quotient: An investigation of adults with Asperger syndrome or high-functioning autism, and normal sex differences. Philosophical Transactions of the Royal Society, 358, 361–374. Retrieved from http://doi.org/10.1098/rstb.2002.1206
Barrett, L.F., Mesquita, B., & Gendron, M. (2011). Context in emotion perception. Current Directions in Psychological Science, 20 (5), 286–290. Retrieved from http://doi.org/10.1177/0963721411422522
Bibby, H., & McDonald, S. (2005). Theory of mind after traumatic brain injury. Neuropsychologia, 43 (1), 99–114. Retrieved from http://doi.org/10.1016/j.neuropsychologia.2004.04.027
Bonebright, T.L., Thompson, J.L., & Leger, D.W. (1996). Gender stereotypes in the expression and perception of vocal affect. Sex Roles, 34 (5/6), 429–445.
Bornhofen, C., & McDonald, S. (2008). Emotion perception deficits following traumatic brain injury: A review of the evidence and rationale for intervention. Journal of the International Neuropsychological Society, 14 (4), 511–525. Retrieved from http://doi.org/10.1017/S1355617708080703
Camarata, S., & Woodcock, R. (2006). Sex differences in processing speed: Developmental effects in males and females. Intelligence, 34, 231–252. Retrieved from http://doi.org/10.1016/j.intell.2005.12.001
Campanella, S., Rossignol, M., Mejias, S., Joassin, F., Maurage, P., Debatisse, D., . . . Guérit, J.M. (2004). Human gender differences in an emotional visual oddball task: An event-related potentials study. Neuroscience Letters, 367, 14–18. Retrieved from http://doi.org/10.1016/j.neulet.2004.05.097
Colantonio, A., Mar, W., Escobar, M., Yoshida, K., Velikonja, D., Rizoli, S., . . . Cullen, N. (2010). Women's health outcomes after traumatic brain injury. Journal of Women's Health, 19 (6), 1109–1116.
Collignon, O., Girard, S., Gosselin, F., Roy, S., Saint-Amour, D., Lassonde, M., & Lepore, F. (2008). Audio-visual integration of emotion expression. Brain Research, 1242, 126–135. Retrieved from http://doi.org/10.1016/j.brainres.2008.04.023
Collignon, O., Girard, S., Gosselin, F., Saint-Amour, D., Lepore, F., & Lassonde, M. (2010). Women process multisensory emotion expressions more efficiently than men. Neuropsychologia, 48 (1), 220–225. Retrieved from http://doi.org/10.1016/j.neuropsychologia.2009.09.007
Cunningham, D.W., & Wallraven, C. (2009). Dynamic information for the recognition of conversational expressions. Journal of Vision, 9 (13), 1–17. Retrieved from http://doi.org/10.1167/9.13.7
Faul, M., Xu, L., & Wald, M. (2010). Traumatic brain injury in the United States: Emergency department visits, hospitalizations and deaths 2002-2006. Atlanta: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control.
Ferstl, E.C., Rinck, M., & von Cramon, D.Y. (2005). Emotional and temporal aspects of situation model processing during text comprehension: An event-related fMRI study. Journal of Cognitive Neuroscience, 17 (5), 724–739. Retrieved from http://doi.org/10.1162/0898929053747658
Frank, M.G., & Stennett, J. (2001). The forced-choice paradigm and the perception of facial expressions of emotion. Journal of Personality and Social Psychology, 80, 75–85.
Hall, J.A., & Matsumoto, D. (2004). Gender differences in judgments of multiple emotions from facial expressions. Emotion, 4 (2), 201–206. Retrieved from http://doi.org/10.1037/1528-3542.4.2.201
Hampson, E., van Anders, S.M., & Mullin, L.I. (2006). A female advantage in the recognition of emotional facial expressions: Test of an evolutionary hypothesis. Evolution and Human Behavior, 27, 407–416. Retrieved from http://doi.org/10.1016/j.evolhumbehav.2006.05.002
Hoffmann, H., Kessler, H., Eppel, T., Rukavina, S., & Traue, H.C. (2010). Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men. Acta Psychologica, 135, 278–283.
Ietswaart, M., Milders, M., Crawford, J.R., Currie, D., & Scott, C.L. (2008). Longitudinal aspects of emotion recognition in patients with traumatic brain injury. Neuropsychologia, 46 (1), 148–159. Retrieved from http://doi.org/10.1016/j.neuropsychologia.2007.08.002
Johnstone, T., & Scherer, K.R. (2000). Vocal communication of emotion. In M. Lewis & J. Haviland (Eds.), The handbook of emotion (pp. 220–235). New York: Guilford.
Keltner, D., & Ekman, P. (2000). Facial expression of emotion. In M. Lewis & J.M. Haviland-Jones (Eds.), Handbook of emotions. New York: Guilford Press.
Kessels, R.P., Montagne, B., Hendriks, A.W., Perrett, D.I., & de Haan, E.H. (2014). Assessment of perception of morphed facial expressions using the emotion recognition task: Normative data from healthy participants aged 8-75. Journal of Neuropsychology, 8, 75–93.
Killgore, W.D.S., Oki, M., & Yurgelun-Todd, D.A. (2001). Sex-specific developmental changes in amygdala responses to affective faces. Brain Imaging, 12 (2), 427–433.
Krach, S., Blumel, I., Marjoram, D., Lataster, T., Krabbendam, L., Weber, J., . . . Kircher, T. (2009). Are women better mind readers? Sex differences in neural correlates of mentalizing detected with functional MRI. BMC Neuroscience, 10 (9).
LaBar, K.S., Crupain, M.J., Voyvodic, J.T., & McCarthy, G. (2003). Dynamic perception of facial affect and identity in the human brain. Cerebral Cortex, 13 (10), 1023–1033. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/12967919
Lee, T.M., Liu, H.-L., Hoosain, R., Liao, W.-T., Wu, C.-T., Yuen, K.S., . . . Gao, J.-H. (2002). Gender differences in neural correlates of recognition of happy and sad faces in humans assessed by functional magnetic resonance imaging. Neuroscience Letters, 333, 13–16.
Li, H., Yuan, J., & Lin, C. (2008). The neural mechanism underlying the female advantage in identifying negative emotions: An event-related potential study. NeuroImage, 40 (4), 1921–1929.
Martin, I., & McDonald, S. (2003). Weak coherence, no theory of mind, or executive dysfunction? Solving the puzzle of pragmatic language disorders. Brain and Language, 85 (3), 451–466. Retrieved from http://doi.org/10.1016/S0093-934X(03)00070-1
McDonald, S., & Flanagan, S. (2004). Social perception deficits after traumatic brain injury: Interaction between emotion recognition, mentalizing ability, and social communication. Neuropsychology, 18 (3), 572–579. Retrieved from http://doi.org/10.1037/0894-4105.18.3.572
Milders, M., Fuchs, S., & Crawford, J.R. (2003). Neuropsychological impairments and changes in emotional and social behaviour following severe traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 25 (2), 157–172. Retrieved from http://doi.org/10.1076/jcen.25.2.157.13642
Milders, M., Ietswaart, M., Crawford, J.R., & Currie, D. (2006). Impairments in theory of mind shortly after traumatic brain injury and at 1-year follow-up. Neuropsychology, 20 (4), 400–408. Retrieved from http://doi.org/10.1037/0894-4105.20.4.400
Montagne, B., Kessels, R.P., Frigerio, E., de Haan, E.H., & Perrett, D.I. (2005). Sex differences in the perception of affective facial expressions: Do men really lack emotional sensitivity? Cognitive Processing, 6 (2), 136–141.
Nalder, E., Fleming, J., Cornwell, P., Foster, M., Skidmore, E., Bottari, C., & Dawson, D. (2016). Sentinel events during the transition from hospital to home: A longitudinal study of women with traumatic brain injury. Archives of Physical Medicine and Rehabilitation, 2 (Suppl. 1), 546553.
Neumann, D., Babbage, D., Zupan, B., & Willer, B. (2015). A randomized controlled trial of emotion recognition training after traumatic brain injury. Journal of Head Trauma Rehabilitation, 30 (3), E12–E23. Retrieved from http://doi.org/10.1097/HTR.0000000000000054
Neumann, D., Zupan, B., Babbage, D.R., Radnovich, A.J., Tomita, M., Hammond, F., & Willer, B. (2012). Affect recognition, empathy, and dysosmia after traumatic brain injury. Archives of Physical Medicine and Rehabilitation, 93 (8), 1414–1420. Retrieved from http://doi.org/10.1016/j.apmr.2012.03.009
Neumann, D., Zupan, B., Malec, J.F., & Hammond, F.M. (2013). Relationships between alexithymia, affect recognition, and empathy after traumatic brain injury. Journal of Head Trauma Rehabilitation, 29 (1), E18–E27.
Nowicki, S. (2008). The manual for the receptive tests of the diagnostic analysis of nonverbal accuracy 2 (DANVA2). Atlanta, GA: Department of Psychology, Emory University.
Nowicki, S., Jr., & Carton, J. (1993). The measurement of emotional intensity from facial expressions. The Journal of Social Psychology, 133 (5), 749–750.
Nowicki, S., Jr., & Duke, M.P. (1994). The association of children's nonverbal decoding abilities with their popularity, locus of control, and academic achievement. The Journal of Genetic Psychology, 153 (4), 385–393.
Nowicki, S., & Mitchell, J. (1998). Accuracy in identifying affect in child and adult faces and voices and social competence in preschool children. Genetic, Social & General Psychology, 124 (1), 39–59.
Phillips, M. (2003). Understanding the neurobiology of emotion perception: Implications for psychiatry. The British Journal of Psychiatry, 182, 190–192.
Radice-Neumann, D., Zupan, B., Babbage, D.R., & Willer, B. (2007). Overview of impaired facial affect recognition in persons with traumatic brain injury. Brain Injury, 21 (8), 807–816. Retrieved from http://doi.org/10.1080/02699050701504281
Ratcliff, J.J., Greenspan, A.I., Goldstein, F.C., Stringer, A.Y., Bushnik, T., Hammond, F.M., . . . Wright, D.W. (2007). Gender and traumatic brain injury: Do the sexes fare differently? Brain Injury, 21 (10), 1023–1030. Retrieved from http://doi.org/10.1080/02699050701633072
Rigon, A., Turkstra, L., Mutlu, B., & Duff, M. (2016). The female advantage: Sex as a possible protective factor against emotion recognition impairment following traumatic brain injury. Cognitive, Affective, & Behavioral Neuroscience. Advance online publication (May 31, 2016). Retrieved from http://doi.org/10.3758/s13415-016-0437-0
Rosenberg, H., McDonald, S., Dethier, M., Kessels, R.P.C., & Westbrook, R.F. (2014). Facial emotion recognition deficits following moderate-severe traumatic brain injury (TBI): Re-examining the valence effect and the role of emotion intensity. Journal of the International Neuropsychological Society, 20, 994–1003.
Rosip, J.C., & Hall, J.A. (2004). Knowledge of nonverbal cues, gender, and nonverbal decoding accuracy. Journal of Nonverbal Behavior, 28 (4), 267–286. Retrieved from http://doi.org/10.1007/s10919-004-4159-6
Russell, J.A., Bachorowski, J.-A., & Fernandez-Dols, J.-M. (2003). Facial and vocal expressions of emotion. Annual Review of Psychology, 54, 329–349. Retrieved from http://doi.org/10.1146/annurev.psych.54.101601.145102
Schirmer, A., & Kotz, S.A. (2003). ERP evidence for a sex-specific Stroop effect in emotional speech. Journal of Cognitive Neuroscience, 15 (8), 1135–1148.
Schirmer, A., Kotz, S.A., & Friederici, A.D. (2002). Sex differentiates the role of emotional prosody during word processing. Cognitive Brain Research, 14, 228–233.
Schirmer, A., Striano, T., & Friederici, A.D. (2005). Sex differences in the preattentive processing of vocal emotional expressions. Cognitive Neuroscience and Neuropsychology, 16 (2), 635–639.
Schulz, J., & Pilz, K.S. (2009). Natural facial motion enhances cortical responses to faces. Experimental Brain Research, 194 (3), 465–475.
Spell, L.A., & Frank, E. (2000). Recognition of nonverbal communication of affect following traumatic brain injury. Journal of Nonverbal Behavior, 24 (4), 285–300.
Thayer, J., & Johnson, B. (2000). Sex differences in judgement of facial affect: A multivariate analysis of recognition errors. Scandinavian Journal of Psychology, 41, 243–246.
Thompson, A.E., & Voyer, D. (2014). Sex differences in the ability to recognise non-verbal displays of emotion: A meta-analysis. Cognition and Emotion, 28 (7), 1164–1195. Retrieved from http://doi.org/10.1080/02699931.2013.875889
Turkstra, L.S. (2008). Conversation-based assessment of social cognition in adults with traumatic brain injury. Brain Injury, 22 (5), 397–409. Retrieved from http://doi.org/10.1080/02699050802027059
Walbott, H.G., & Scherer, K.R. (1986). Cues and channels in emotion recognition. Journal of Personality and Social Psychology, 51 (4), 690–699. Retrieved from http://doi.org/10.1037/0022-3514.51.4.690
Wildgruber, D., Pihan, H., Ackermann, H., Erb, M., & Grodd, W. (2002). Dynamic brain activation during processing of emotional intonation: Influence of acoustic parameters, emotional valence, and sex. NeuroImage, 15, 856–869. Retrieved from http://doi.org/10.1006/nimg.2001.0998
Yim, J., Babbage, D.R., Zupan, B., Neumann, D., & Willer, B. (2013). The relationship between facial affect recognition and cognitive functioning after traumatic brain injury. Brain Injury, 27 (10), 1155–1161.
Zupan, B., Babbage, D.R., Neumann, D., & Willer, B. (2014). Recognition of facial and vocal affect following traumatic brain injury. Brain Injury, 28 (8), 1–9. Retrieved from http://doi.org/10.3109/02699052.2014.901560
Zupan, B., & Neumann, D. (2014). Affect recognition in traumatic brain injury: Responses to unimodal and multimodal media. The Journal of Head Trauma Rehabilitation, E1–E12. Retrieved from http://doi.org/10.1097/HTR.0b013e31829dded6
Zupan, B., Neumann, D., Babbage, D.R., & Willer, B. (2009). The importance of vocal affect to bimodal processing of emotion: Implications for individuals with traumatic brain injury. Journal of Communication Disorders, 42 (1), 1–17. Retrieved from http://doi.org/10.1016/j.jcomdis.2008.06.001
Zupan, B., Neumann, D., Babbage, D.R., & Willer, B. (2015). Exploration of a new tool for assessing emotional inferencing after traumatic brain injury. Brain Injury, 29 (7–8). Retrieved from http://doi.org/10.3109/02699052.2015.1011233