Individuals with autism spectrum conditions (ASC) experience clinically significant difficulty in social or emotional communication, which can include marked impairment in the appropriate use of non-verbal signals (e.g., facial expressions), in the recognition or sharing of emotions, and in responses to the emotions of others (American Psychiatric Association, 2013). Unusually narrow, rigid, or atypical interests are another characteristic of individuals with ASC. Although ASC describe a neurodevelopmental condition and a psychiatric diagnosis, they represent the clinical extreme of a broad spectrum of neurodiversity. Indeed, autistic traits are commonly understood to occur along a continuum not only within clinical populations, but also in the general population (Allison et al., 2012; Baron-Cohen et al., 1997; Ruzich et al., 2016).
Because emotion recognition is key to effective social interaction, research has focused on understanding the characteristics and limitations of this skill in individuals with ASC or elevated autistic traits. On the one hand, many studies have demonstrated that individuals with ASC or elevated autistic traits have more difficulty interpreting the facial emotions of others than do those without ASC or those low in autistic traits (Actis-Grosso et al., 2015; Ashwin et al., 2006; Celani et al., 1999; Clark et al., 2008; Golan et al., 2006; Hobson et al., 1989; Ingersoll, 2010; Macdonald et al., 1989; McKenzie et al., 2018; Philip et al., 2010; Poljac et al., 2013; Sucksmith et al., 2013; Wallace et al., 2008). On the other hand, some research has failed to demonstrate a relationship between ASC or autistic traits and facial emotion recognition (Baron-Cohen et al., 1997; Harms et al., 2010; Jones et al., 2011; Miyahara et al., 2007; Piggot et al., 2004; Tracy et al., 2011; see Yeung, 2022, for a recent review). Uljarevic and Hamilton (2013) conducted a meta-analysis of 48 studies comparing those formally diagnosed with ASC and those without ASC on their ability to accurately identify the emotions of human faces in behavioral tasks. Although the behavioral tasks used to assess emotion recognition varied from study to study, they most commonly involved asking participants to identify the emotions displayed in a series of faces, or to select the faces matching a series of emotions. After correcting for publication bias, the results of this meta-analysis supported the conclusion that individuals with ASC show more difficulty with facial emotion recognition (Hedges’ g = 0.41).
Despite previous research converging on the general finding that ASC and elevated autistic traits are associated with deficits in facial emotion recognition, it remains unclear why only some individuals on the autism spectrum experience difficulty with recognizing facial emotions, and why those individuals developed this difficulty. In their meta-analysis, Uljarevic and Hamilton (2013) found that factors such as age, intelligence (IQ), and type of behavioral task used in a study did not explain the deficits in facial emotion recognition found among individuals with ASC. They suggested instead that reduced attention to some important features of human faces, such as the eyes, might explain these deficits. Indeed, a different line of research has revealed that individuals with ASC or elevated autistic traits may be less oriented to human faces (Black et al., 2017; Dawson et al., 2005; Guillon et al., 2016; Klin et al., 2002; Pelphrey et al., 2002; Vettori et al., 2020) and find viewing them less rewarding (Chevallier et al., 2012; Cuve et al., 2018; Silva et al., 2015; Stuart et al., 2023; Tottenham et al., 2014).
Autism spectrum, anime, and facial emotion recognition
Atherton and Cross (2018) suggested that the reduced saliency of human faces for individuals with ASC or elevated autistic traits, combined with their propensity to develop strong circumscribed interests, results in their tendency to gravitate toward non-human social interests, such as cartoons. In line with this reasoning, Silva et al. (2015) found that children diagnosed with ASC showed a preference for viewing cartoon faces over human faces and were, in fact, avoidant of human faces. Jones et al. (2011) reported survey data indicating that adolescents with ASC spend a large amount of time viewing screen-based media. Mazurek et al. (2012) and Shane and Albert (2008) similarly noted that parents of children with ASC reported electronic screen engagement, especially with animated television shows and movies, as their child’s most common leisure activity. Finally, individuals with ASC or elevated autistic traits have shown a specific and strong preference for anime (Atherton et al., 2023), a style of Japanese animation that has rapidly grown in popularity since the introduction of Pokémon in the 1990s (Allison, 2006; Napier, 2006; Otmazgin, 2014). Anime is a specific type of cartoon with a distinct art style originating in Japan, featuring vibrant characters with exaggerated facial expressions; cartoons more broadly include anime and a range of other art styles, though the term usually refers to Western animation. One study analyzed the circumscribed interests that tend to characterize individuals on the autism spectrum, revealing that anime was the most frequently reported topic of interest (South et al., 2005). Another study examining the media use of adolescents with ASC found that websites focused on anime were the second most popular among all the websites that they visited, after websites discussing video games (Kuo et al., 2014).
The well-documented interest in viewing animated media and the specific preference for anime among individuals on the autism spectrum have been attributed to the use of distinct characters in these media, whose facial expressions are exaggerated to the point of caricature (Atherton et al., 2023; Liu et al., 2019). When anime characters show surprise, excitement, or happiness, for example, their eyes widen to comically unrealistic proportions. Similarly, sadness in anime characters is often expressed through a waterfall of tears literally splashing down their faces. These exaggerated displays of emotional expression in the faces of anime characters might make them especially clear and salient even to individuals with ASC or elevated autistic traits. Consistent with this idea, Berthoz et al. (2013), Rump et al. (2009), and Song and Hakoda (2018) found that individuals with ASC were less impaired in their facial emotion recognition when presented with exaggerated, overt emotional expressions, but had more difficulty accurately interpreting subtle facial emotions. Rozema (2015) suggested that the visual tropes used in media like anime to communicate emotions provide clear and repetitive visual cues, allowing individuals on the autism spectrum to recognize and remember underlying patterns of emotional expression more easily. As a result, individuals with ASC or elevated autistic traits may develop a particular affinity for anime relative to other interests.
While it has been suggested that individuals on the autism spectrum prefer anime because they can more easily understand the often-exaggerated facial emotions of the characters, no previous study of facial emotion recognition has used anime characters to test this possibility. Furthermore, few previous studies have directly examined facial emotion recognition for animated or cartoon faces in individuals with ASC or elevated autistic traits. In one study, Rosset et al. (2008) found that children with ASC processed cartoon faces the way typically developing children processed human faces, and their facial emotion recognition when viewing cartoon faces was not impaired to the same degree as it was when viewing human faces. Brosnan et al.’s (2015) study supported and expanded these findings, showing that adolescents with ASC not only performed better when viewing cartoon faces compared with human faces, but also outperformed those without ASC in accurately recognizing emotions expressed in static cartoon faces. Similarly, Cross et al. (2019) found that adolescents with ASC showed improvement in facial emotion recognition when human faces were put through an animal filter and thus appeared like anthropomorphic lion or gorilla faces. In another study, Atherton and Cross (2019) showed that the use of animal versus human characters in social stories was related to improved recognition of social norm violations among those high in autistic traits. Finally, two recent studies examined the extent to which ASC (Cross et al., 2022) or elevated autistic traits (Atherton & Cross, 2022) were related to facial emotion recognition accuracy using both a human and a cartoon version of a test for reading emotions in the eyes. On the human test, adults with ASC performed no differently, and adults with elevated autistic traits performed significantly worse, than those without ASC or with low autistic traits. On the cartoon version, however, the group with ASC performed better and the group with elevated autistic traits performed no differently. These findings suggest that facial emotion recognition deficits in individuals with ASC or high in autistic traits are not global, but specific to the evaluation of human faces.
Autism spectrum, alexithymia, and facial emotion recognition
Alexithymia is a subclinical trait characterized by difficulties in identifying and describing feelings, associating bodily sensations with specific feeling states, and using words to express emotions (Berthoz et al., 2011; Nemiah et al., 1976). Like autistic traits, alexithymia appears to exist along a continuum, with higher levels reflecting more difficulty in the cognitive processing and regulation of emotions (Taylor et al., 1997). Studies have revealed a much higher rate of alexithymia among individuals on the autism spectrum relative to the general population, with estimates for autistic populations ranging from 50% to 85% (Berthoz & Hill, 2005; Berthoz et al., 2013; Hill et al., 2004). For comparison, the prevalence rate of alexithymia in the general population is estimated to be closer to 10% (Linden et al., 1995).
Some researchers have found evidence that the difficulties in emotion recognition observed among individuals on the autism spectrum might result not from the social deficits characteristic of ASC or autistic traits, but rather from the frequently co-occurring condition of alexithymia (Bernhardt et al., 2014; Bird & Cook, 2013; Cook et al., 2013; Heaton et al., 2012; Milosavljevic et al., 2016; Oakley et al., 2016; Ola & Gullon-Scott, 2020; Santiesteban et al., 2021; Trevisan et al., 2016). Indeed, one study found that reduced eye gaze to facial emotional expressions (e.g., lower fixation count and duration) was associated more with alexithymia than with either ASC or autistic traits in a sample of adults with and without a formal diagnosis (Cuve et al., 2021). In another study examining the role of alexithymia and anxiety in the relationship between empathic ability and autistic traits, Brett and Maybery (2022) found that autistic traits predicted empathy through alexithymia and anxiety.
Based on their systematic review and meta-analysis of 15 studies, Kinnaird et al. (2019) concluded that it was common but not universal for individuals on the autism spectrum to experience co-occurring alexithymia, finding a mean prevalence rate of 50%. They further suggested that the emotion recognition difficulties traditionally associated with ASC or autistic traits might be better explained by alexithymia, in line with previous research (e.g., Bird & Cook, 2013). Most recently, results from a factor analysis of measures widely used to assess alexithymia and autistic traits indicated that these two constructs are distinct, because items from each measure loaded on separate factors (Cuve et al., 2022). Network analyses similarly produced separate clusters comprising only items representing either alexithymic or autistic traits.
Considering that alexithymia might characterize approximately half or more of the autistic population, other researchers have argued that alexithymia is a core feature or consequence of ASC or autistic traits rather than a distinct condition (Gaigg, 2012; Quattrocki & Friston, 2014). They have also pointed to the problems of relying on self-report measures to assess alexithymic and autistic traits, noting that self-report could bias the conclusions drawn from this body of research (Gaigg et al., 2018). The significant overlap between alexithymia and ASC or autistic traits has led to the theory that they share the same underlying developmental process leading to their observed social and emotional deficits. Several neuroimaging studies have contributed evidence supporting this theory (Grynberg et al., 2012; Moriguchi et al., 2006; Silani et al., 2008), showing that some neural components are potentially shared between individuals high in alexithymia and individuals on the autism spectrum, specifically brain regions responsible for perspective taking and mentalizing (i.e., understanding the mental states of the self and others).
Discussing the complex relationship between alexithymia and ASC, Poquérusse et al. (2018) acknowledged that there is no clear answer as to whether they are co-occurring conditions or whether alexithymia is a common component of ASC. In the absence of consensus on whether ASC and alexithymia (or autistic and alexithymic traits more broadly) may be considered distinct but commonly co-occurring conditions relevant to facial emotion recognition deficits, our hypotheses and study design were guided by the understanding that it would be important to evaluate the role of both autistic and alexithymic traits in facial emotion recognition.
The present research
To address gaps in the previous literature, the present study used a computerized task to examine not only whether individuals higher in autistic traits are less accurate in recognizing the facial emotional expressions of human characters, consistent with past studies, but also whether this difficulty disappears when they are presented with anime faces. In other words, we hypothesized that elevated autistic traits would be associated with lower emotion recognition scores when human faces are used as targets, but not when anime faces are used as targets. Anime faces were chosen because they are drawn with exaggerated facial expressions that may help to reduce some of the visual processing deficits associated with ASC and autistic traits (Atherton et al., 2023; Liu et al., 2019; Rozema, 2015), and because there is a demonstrated preference for anime among individuals in the autistic community (Kuo et al., 2014; South et al., 2005).
A measure of alexithymia was included to examine whether higher scores on this subclinical trait are also related to more difficulty recognizing human and anime facial emotions, and whether alexithymia can fully or partly explain the associations between autistic traits and facial emotion recognition. Although we hypothesized that elevated alexithymia would be associated with lower human facial emotion recognition scores, we did not have a strong prediction about the relationship between alexithymia and anime facial emotion recognition scores. However, we expected that when autistic traits and alexithymia were entered together, each controlling for the other in predicting human and anime facial emotion recognition scores, alexithymia would explain more of the variance in those scores than autistic traits.
If individuals higher in autistic traits do not show the same difficulty recognizing the facial emotional expressions of anime characters as they do the expressions of human characters, these findings could inform the development of more effective clinical interventions aimed at improving facial emotion recognition and positive social behaviors among individuals on the autism spectrum. If alexithymia is found to be more relevant than autistic traits in explaining facial emotion recognition, interventions could more specifically target alexithymia.
Method
Participants
Participants were 247 adults (129 male, 115 female, 2 non-binary, and 1 other) ranging in age from 18 to 63 years old (M = 27.65 years, SD = 10.63). They included 125 who were recruited from the psychology participant pool at Pennsylvania State University Abington, and another 122 from the crowdsourcing website Amazon Mechanical Turk (https://www.mturk.com/mturk/welcome), which connects paid volunteers with researchers who are looking for participants to complete their studies. Participants completed this online study on their own computer and at their own convenience by first choosing from a list of available studies, either through Amazon Mechanical Turk or the psychology participant pool, and then being directed to a website to complete the study. After completing the study, participants from the psychology participant pool received credits toward a research requirement in their psychology course, and those recruited from Amazon Mechanical Turk were compensated $1.00. A wage analysis of 4,500 studies hosted on Amazon Mechanical Turk between 2015 and 2019 estimated that participants were completing studies for an average of $6.00–$7.00 an hour (Moss et al., 2023). Because our study had a median completion time of about 10 minutes, a payment of $1.00 closely approximated the average hourly earnings for participants on Amazon Mechanical Turk. Furthermore, we were limited by funding constraints. This study and its procedures (including compensation) were approved by Pennsylvania State University’s Institutional Review Board.
Participants from Amazon Mechanical Turk were significantly older (M = 35.45 years, SD = 8.90) than those from the psychology participant pool (M = 20.03 years, SD = 5.33), t(197) = 16.46, p < .0001. This age difference was expected, because we intentionally recruited participants from Amazon Mechanical Turk to obtain a more representative sample, knowing that the psychology participant pool is only open to undergraduate students at our campus. While the reliability and validity of data collected on Amazon Mechanical Turk have been supported by some researchers (Buhrmester et al., 2018; Paolacci & Chandler, 2014), others have recently raised concerns about data collection on this platform (Chmielewski & Kucker, 2020; Newman et al., 2021; Webb & Tangney, 2024).
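Although the analysis software was not specified, the reported degrees of freedom (197 rather than the pooled 245) are consistent with a Welch-style correction for unequal group variances. A minimal Python sketch of such a comparison, using simulated ages that match the reported group statistics rather than the actual study data, might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated ages matching the reported group sizes, means, and SDs
# (illustration only; not the actual study data).
mturk_ages = rng.normal(loc=35.45, scale=8.90, size=122)
pool_ages = rng.normal(loc=20.03, scale=5.33, size=125)

# equal_var=False requests Welch's t-test; the reported df of 197
# (rather than the pooled 245) is consistent with this correction.
t_stat, p_value = stats.ttest_ind(mturk_ages, pool_ages, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```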
Participants from either recruitment source were excluded from all data analyses if they did not reach the end of the study (n = 18) or completed it in less than five minutes (n = 8). We further excluded participants who provided the same rating to all 10 items on the Short Autism Spectrum Quotient (n = 5) or to all 23 items on the Revised Toronto Alexithymia Scale (n = 8), because these response patterns suggest that they were not paying close attention to the questions (see the next section for descriptions of these measures). These exclusion criteria were not mutually exclusive, and participants excluded for one reason often also qualified for exclusion under another. Interestingly, all participants excluded for not reaching the end of the study or for completing it in less than five minutes were from Amazon Mechanical Turk, while all but two participants excluded for providing the same rating to all items on at least one of the measures were from the psychology participant pool.
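To make these rules concrete, the sketch below shows one way the exclusions could be applied with pandas; the data frame and column names are hypothetical stand-ins for the raw survey export, not the actual dataset:

```python
import pandas as pd

# Toy data frame standing in for the raw survey export; all column
# names are hypothetical, not those of the actual dataset.
df = pd.DataFrame({
    "finished": [True, True, False],
    "duration_min": [12.0, 3.5, 15.0],
    "aq_1": [1, 2, 2], "aq_2": [3, 2, 4],    # all 10 AQ-10 items in practice
    "tas_1": [2, 5, 1], "tas_2": [4, 5, 3],  # all 23 TAS-R items in practice
})
aq_cols = ["aq_1", "aq_2"]
tas_cols = ["tas_1", "tas_2"]

keep = (
    df["finished"]                          # reached the end of the study
    & (df["duration_min"] >= 5)             # took at least five minutes
    & (df[aq_cols].nunique(axis=1) > 1)     # did not give one flat AQ-10 rating
    & (df[tas_cols].nunique(axis=1) > 1)    # did not give one flat TAS-R rating
)
analytic_sample = df[keep]
```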
Measures
Short Autism Spectrum Quotient
A short form of the Autism Spectrum Quotient (AQ-10; Allison et al., 2012) was previously developed and validated for measuring autistic traits with 10 items. Example items include statements intended to assess social deficits: “I find it difficult to work out people’s intentions,” or “I find it easy to work out what someone is thinking or feeling just by looking at their face.” Example items also include statements intended to assess unusually narrow, rigid, or atypical interests, among other autistic traits: “I like to collect information about categories of things (e.g. types of car, types of bird, types of train, types of plant)” and “I often notice small sounds when others do not.” For each of the 10 items, participants were asked whether they definitely agree, slightly agree, slightly disagree, or definitely disagree with the statement. On normally coded items, responses of definitely agree or slightly agree earned one point. On reverse coded items, responses of definitely disagree or slightly disagree earned one point. Thus, scores on this measure range from 0 to 10. Higher scores on the AQ-10 indicate higher levels of autistic traits. The internal consistency of the AQ-10 in this sample was low (α = 0.45) but consistent with previous findings (Jia et al., 2019; Sizoo et al., 2015; Taylor et al., 2020).
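To make the scoring rule concrete, here is a minimal Python sketch; the set of reverse-keyed item numbers is a hypothetical placeholder, not the published scoring key:

```python
# Minimal sketch of the AQ-10 scoring rule, assuming responses coded
# 1 = definitely agree, 2 = slightly agree, 3 = slightly disagree,
# 4 = definitely disagree. REVERSE_KEYED is a hypothetical placeholder,
# not the published scoring key.
REVERSE_KEYED = {2, 5, 8}

def score_aq10(responses: dict[int, int]) -> int:
    """responses maps item number (1-10) to a rating (1-4)."""
    total = 0
    for item, rating in responses.items():
        agrees = rating in (1, 2)  # definitely or slightly agree
        if item in REVERSE_KEYED:
            total += 0 if agrees else 1  # disagreement earns the point
        else:
            total += 1 if agrees else 0  # agreement earns the point
    return total  # 0-10; higher scores indicate more autistic traits
```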
Revised Toronto Alexithymia Scale
The Revised Toronto Alexithymia Scale (TAS-R; Taylor et al., 1992) is a 23-item scale used to measure alexithymia, which comprises the inability (or limited ability) of a person to experience, identify, or describe emotions, or to relate them to the stimuli that caused them. Example items include statements such as: “I have feelings that I can’t quite identify,” or “I often don’t know why I am angry.” Participants were asked to indicate on a 5-point Likert scale the extent to which they agree or disagree with each item (1 = strongly disagree, 2 = moderately disagree, 3 = neither agree nor disagree, 4 = moderately agree, or 5 = strongly agree). Of the 23 items, six are reverse coded, such that responses of strongly agree or moderately agree are scored as 1 and 2, respectively, and responses of moderately disagree or strongly disagree are scored as 4 and 5, respectively. A total score on the TAS-R is obtained by summing responses to all 23 items, resulting in a range of 23 to 115, with higher scores indicating higher levels of alexithymia. Taylor et al. (1992) found that the mean score for individuals clinically judged to meet criteria for alexithymia was 66.4 (SD = 13.4), while the mean score for individuals who were not judged to meet criteria was 56.7 (SD = 12.2). The internal consistency of the TAS-R in this sample was high (α = 0.88).
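A parallel sketch for the TAS-R, where reverse coding flips the 5-point scale rather than awarding binary points (the reverse-coded item numbers are again hypothetical placeholders):

```python
# Minimal sketch of TAS-R scoring on the 5-point Likert scale.
# REVERSE_CODED is a hypothetical placeholder, not the published key.
REVERSE_CODED = {3, 7, 11, 15, 19, 22}  # six reverse-coded items

def score_tas_r(responses: list[int]) -> int:
    """responses is a list of 23 ratings (1-5) in item order."""
    total = 0
    for item, rating in enumerate(responses, start=1):
        # Reverse coding flips the scale: 5 -> 1, 4 -> 2, and so on.
        total += (6 - rating) if item in REVERSE_CODED else rating
    return total  # 23-115; higher scores indicate more alexithymia
```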
Facial emotion recognition test
Participants completed a facial emotion recognition test on their computer. First, they were randomly presented with two human and two anime faces to become familiar with the task. This brief practice was then followed by a series of 12 human and 12 anime faces presented in random order. Each face expressed one of six different emotions: happiness, sadness, anger, fear, disgust, or surprise. These emotions were chosen based on previous research, which found that the facial expressions of these six emotions are recognizable across all cultures (Ekman & Friesen, 1971; Izard et al., 1984; Izard, 1994; Matsumoto, 2001). During the test, each face was displayed for three seconds before disappearing, at which point the participant was asked to choose which of the six emotions was expressed by the face they were just shown. Participants earned one point for each facial expression they identified accurately, for a total of 12 possible points on the human section of the facial emotion recognition test, and 12 possible points on the anime section. The three-second display time was chosen after reviewing a previous study that analyzed multiple facial emotion recognition tests and found that two to three seconds was ideal for preventing ceiling effects, wherein neurotypical individuals can too easily receive perfect scores (Wilhelm et al., 2014).
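The trial structure and scoring can be sketched as follows; this is a minimal, self-contained Python mock-up in which the stimulus display is omitted and a random choice stands in for the participant's response, so it illustrates the trial order and scoring rather than the actual task software:

```python
import random

EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

# Hypothetical stimulus list: 12 human and 12 anime trials (two per emotion).
stimuli = [(kind, emo) for kind in ("human", "anime") for emo in EMOTIONS * 2]
random.shuffle(stimuli)  # human and anime trials interleaved in random order

scores = {"human": 0, "anime": 0}
for kind, answer in stimuli:
    # In the real task the face is displayed for three seconds, removed,
    # and the participant then picks one of the six emotions; here a
    # random choice stands in for the response so the sketch runs.
    response = random.choice(EMOTIONS)
    if response == answer:
        scores[kind] += 1  # one point per correct answer, 12 possible per section

print(scores)
```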
The human faces in the facial emotion recognition test were sourced from a previously validated set of faces with differing expressions, the Warsaw Set of Emotional Facial Expression Pictures (Olszanowski et al., 2015). This collection includes front-facing images of 30 White European individuals with acting experience, expressing a variety of emotions. These images were rated by over 1,300 independent judges to determine how well the facial expressions matched the intended emotion. For the present study, we chose the six male and six female human faces that were determined by judges to most accurately depict each of the six universally recognizable emotions. Thus, the final stimulus set included 12 different human faces: a male and a female face expressing each of happiness, sadness, anger, fear, disgust, and surprise. For the practice section of the facial emotion recognition test, we used a male human face expressing sadness and a female human face expressing happiness that were different from those included in the final stimulus set.
Because no previously validated set of anime facial expressions existed, images were collected by the first author using Google Images for the sole purpose of this study. Using the search terms anime, manga, facial expressions, emotion, and facial emotional expressions in combination with one another, a preliminary compilation of 55 images was created. Due to constraints in time and funding, we could not recruit a separate sample for a pilot study in which these images of anime faces could be systematically rated on how well they represented the emotions that they were supposed to depict. Because we could not validate the anime stimuli with a pilot study, we reviewed and discussed the images among ourselves before choosing the 12 images that were determined to best express each of the six universally recognizable emotions. The final stimulus set included one male and one female anime face expressing happiness, one male and one female anime face expressing sadness, two male and one female anime face expressing anger, two female anime faces expressing fear, one male and one female anime face expressing disgust, and one male anime face expressing surprise. The same male anime character was used to express three of the six emotions (surprise, sadness, and happiness), while the other nine anime characters were unique. For the practice section of the facial emotion recognition test, we used a single male anime character, different from all of the above, expressing happiness in one image and anger in another. Because anime faces are less familiar to the general population, we provide the complete set of images (both human and anime faces) used in the facial emotion recognition test at the following link: https://osf.io/qyzgd.
Potentially confounding variables
To address potential confounds, participants answered the following two additional questions about their frequency of social interaction and their frequency of anime or manga use, respectively: “Approximately how often do you have face-to-face social interactions with others?” and “Approximately how often do you watch anime or read manga?” Responses to both questions were on an 8-point scale (1 = never, 2 = less than once per year, 3 = once per year, 4 = once per month, 5 = once per week, 6 = 2–3 times per week, 7 = once per day, or 8 = every day, multiple times per day).
Results
Preliminary analyses
Table 1 presents the means, standard deviations, and correlations among the study variables, including age, frequency of social interaction, frequency of anime or manga use, AQ-10, TAS-R, human facial emotion recognition score, and anime facial emotion recognition score. Due to the non-normal distributions of most of the study variables, Spearman’s rank-order correlation coefficients were used for tests of association instead of Pearson’s correlation coefficients.
Table 1. Means, standard deviations, and correlations among study variables

Note. N = 247. All correlations used Spearman’s ρ. AQ-10 = Short Autism Spectrum Quotient (Allison et al., 2012); TAS-R = Revised Toronto Alexithymia Scale (Taylor et al., 1992). *p < .05, **p < .005, ***p < .0001.
As shown in Table 1, participants reported high frequencies of social interaction (i.e., the relevant distribution was negatively skewed), but moderate and more varied frequencies of anime or manga use. Scores on the AQ-10 also varied, and their distribution was positively skewed. Interestingly, 40 participants (16% of the sample) scored 6 or higher on this scale, which is the cutoff that has been used to refer individuals for the assessment of clinically diagnosed ASC (Allison et al., 2012). Scores on the TAS-R showed considerable variation and a slight negative skew in their distribution. Finally, the distribution of scores on the human section of the facial emotion recognition test was negatively skewed, with most participants correctly identifying the emotions of more than 9 out of 12 faces, while scores on the anime section were more normally distributed.
There was a strong positive correlation between scores on the human section and scores on the anime section of the facial emotion recognition test, r_s(245) = .47, p < .0001. Furthermore, the potentially confounding variables of age, frequency of social interaction, and frequency of anime or manga use were significantly correlated not only with the TAS-R, but also with scores on the human and anime facial emotion recognition tests (see Table 1). The one exception was that frequency of anime or manga use was negatively, but not significantly, correlated with anime facial emotion recognition scores, r_s(245) = −.07, p = .2967.
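To illustrate the rank-based approach used for all of these tests of association, the following minimal sketch computes Spearman’s ρ in Python on simulated, deliberately skewed scores (not the study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated, deliberately skewed scores (illustration only).
tas_r = rng.integers(23, 116, size=247)
human_fer = np.clip(np.round(rng.normal(10, 2, size=247)), 0, 12)

# Spearman's coefficient operates on ranks, so it is robust to the
# skewed, non-normal distributions described above.
rho, p = stats.spearmanr(tas_r, human_fer)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```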
Correlations of AQ-10 and TAS-R with human and anime facial emotion recognition
Table 1 shows that, in accordance with our hypothesis, scores on the AQ-10 were negatively and significantly correlated with performance on the human facial emotion recognition test, r_s(245) = −.20, p = .0015, but not with performance on the anime facial emotion recognition test, r_s(245) = −.05, p = .3937. Thus, participants with more autistic traits tended to have more difficulty in accurately recognizing emotional expressions in human faces. When anime faces were presented, however, those higher in autistic traits did not show significantly more difficulty with emotion recognition than did those lower in autistic traits. Figure 1 depicts the correlations between scores on the AQ-10 and scores on the human and anime facial emotion recognition tests in the form of two scatterplots.

Figure 1. Correlations of autistic traits with performance on the human (left) and anime (right) facial emotion recognition tests. Note. N = 247. Points represent individual participants. Shaded regions represent 95% confidence intervals. AQ-10 = Short Autism Spectrum Quotient (Allison et al., 2012).
Table 1 shows that higher scores on the TAS-R were negatively and significantly correlated with performance on both the human facial emotion recognition test, r_s(245) = −.44, p < .0001, and the anime facial emotion recognition test, r_s(245) = −.37, p < .0001. In contrast to the AQ-10, the TAS-R showed much stronger associations with both the human and anime versions of the facial emotion recognition test. Participants with more alexithymia tended to have more difficulty recognizing the emotions in faces regardless of whether they were human or anime, although the correlation was smaller for anime faces. Figure 2 depicts the correlations between scores on the TAS-R and scores on the human and anime facial emotion recognition tests in the form of two scatterplots.

Figure 2. Correlations of alexithymia with performance on the human (left) and anime (right) facial emotion recognition tests. Note. N = 247. Points represent individual participants. Shaded regions represent 95% confidence intervals. TAS-R = Revised Toronto Alexithymia Scale (Taylor et al., 1992).
Multiple regression analyses
Two hierarchical multiple regression analyses were conducted to determine the extent to which autistic traits and alexithymia were unique predictors of performance on both the human and anime facial emotion recognition tests. In the first model of the hierarchical multiple regression, we entered age, frequency of social interaction, and frequency of anime or manga use as predictor variables, because these variables were significantly associated with performance on either or both of the facial emotion recognition tests. In the second model of the hierarchical multiple regression, we entered the two competing variables of interest, AQ-10 and TAS-R, as predictor variables in addition to those included in the first model. The dependent variables for these two hierarchical multiple regression analyses were scores on the human and anime facial emotion recognition tests. Before conducting multiple regression analyses, the variance inflation factors of predictor variables were examined to ensure a lack of multicollinearity among them. Variance inflation factors did not exceed 1.30, which indicated that each predictor variable contained sufficiently unique information over and above that provided by the others.
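A minimal sketch of this two-step procedure, including the variance inflation factor check, is given below using Python and statsmodels; the data are simulated and the variable names are our own, so this illustrates the modeling strategy rather than reproducing the study’s actual analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)

# Simulated predictors and outcome (illustration only; not the study data).
df = pd.DataFrame({
    "age": rng.normal(28, 10, 247),
    "social_freq": rng.integers(1, 9, 247),
    "anime_freq": rng.integers(1, 9, 247),
    "aq10": rng.integers(0, 11, 247),
    "tas_r": rng.integers(23, 116, 247),
})
y = 12 - 0.04 * df["age"] - 0.03 * df["tas_r"] + rng.normal(0, 1.5, 247)

# Step 1: confounds only; Step 2: add the two trait measures of interest.
step1 = sm.OLS(y, sm.add_constant(df[["age", "social_freq", "anime_freq"]])).fit()
step2 = sm.OLS(y, sm.add_constant(df)).fit()
print(step1.rsquared, step2.rsquared)  # R^2 gain from adding AQ-10 and TAS-R

# Multicollinearity check: VIF for each predictor in the full model.
X = sm.add_constant(df)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)
```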
Table 2 presents the results of the two hierarchical multiple regression analyses. The first model revealed that age (β = −0.18, p = .0043), frequency of social interaction (β = 0.19, p = .0022), and frequency of anime or manga use (β = −0.30, p < .0001) were unique and significant predictors of human facial emotion recognition scores, with each predictor variable controlling for the others. Thus, participants who were older and watched more anime or read more manga scored lower on the human facial emotion recognition test, while those who had more frequent social interactions scored higher. For anime facial emotion recognition scores, only age (β = −0.37, p < .0001) and frequency of social interaction (β = 0.15, p = .0160) were unique and significant predictors controlling for one another. Participants who were older scored lower on the anime facial emotion recognition test, while those who had more frequent social interactions scored higher. Frequency of anime or manga use (β = −0.05, p = .3627), however, was not significantly associated with performance on the anime facial emotion recognition test.
Table 2. Hierarchical multiple regression results for age, frequency of social interaction, frequency of anime or manga use, autistic traits, and alexithymia predicting human and anime facial emotion recognition scores

Note. N = 247. All values are partial regression coefficients (standardized beta weights) or standard errors (as indicated in parentheses) except for those under F, R², and R²adj, which are the F ratio, coefficient of determination, and adjusted coefficient of determination, respectively. AQ-10 = Short Autism Spectrum Quotient (Allison et al., 2012); TAS-R = Revised Toronto Alexithymia Scale (Taylor et al., 1992). *p < .05, **p < .005, ***p < .0001.
In the second model, with the addition of the AQ-10 and TAS-R as predictor variables, age (β = −0.15, p = .0140), frequency of social interaction (β = 0.14, p = .0198), frequency of anime or manga use (β = −0.22, p = .0002), and the TAS-R (β = −0.28, p < .0001) were unique and significant predictors of human facial emotion recognition scores, with each variable controlling for the others. Thus, participants who were older, watched more anime or read more manga, and experienced more alexithymia scored lower on the human facial emotion recognition test, while those who had more frequent social interactions scored higher. For anime facial emotion recognition scores, only age (β = −0.33, p < .0001) and the TAS-R (β = −0.27, p < .0001) were unique and significant predictors controlling for one another. Participants who were older and experienced more alexithymia scored lower on the anime facial emotion recognition test, and their frequency of social interaction (β = 0.11, p = .0786) no longer significantly predicted their performance. Regardless of whether the multiple regression model was predicting performance on the human or the anime facial emotion recognition test, autistic traits (β = −0.04, p = .4523 and β = 0.01, p = .8356, respectively) were not significantly predictive after controlling for alexithymia and the other three variables, in contrast to the zero-order correlations.
The first model explained about 20% of the variance in both human [F(3, 243) = 19.92, p < .0001] and anime facial emotion recognition scores [F(3, 243) = 20.48, p < .0001] among participants. The second and final model improved slightly on the first model, explaining about 27% of the variance in human facial emotion recognition scores [F(5, 241) = 18.03, p < .0001] and about 26% of the variance in anime facial emotion recognition scores [F(5, 241) = 17.09, p < .0001] among participants. For human facial emotion recognition scores, TAS-R was the strongest contributor, followed by frequency of anime or manga use. For anime facial emotion recognition scores, age was the strongest contributor to the variance, followed by TAS-R.
Mediation analyses
The zero-order correlation between the AQ-10 and human (but not anime) facial emotion recognition scores was significant, as was the zero-order correlation between the AQ-10 and TAS-R. After controlling for the TAS-R in the second model of the hierarchical multiple regression analyses, however, the AQ-10 was no longer significantly associated with performance on the human facial emotion recognition test. These results suggest that the TAS-R fully mediated the relationship between the AQ-10 and human facial emotion recognition scores, based on the procedures from Baron and Kenny (1986). To further investigate this possibility, mediation analyses were conducted to determine the extent to which alexithymia mediated the relationship between autistic traits and performance on both the human and anime facial emotion recognition tests. We tested for mediation of the relationship between autistic traits and anime facial emotion recognition scores through alexithymia, despite finding no significant zero-order correlation between the AQ-10 and anime facial emotion recognition scores, because many researchers have suggested that mediation is still possible under such conditions (Collins et al., 1998; Preacher & Hayes, 2004; Rucker et al., 2011; Shrout & Bolger, 2002; Zhao et al., 2010).
Figure 3 presents a path diagram for the mediation analysis in which the AQ-10 predicted human facial emotion recognition scores through the TAS-R. The standardized regression coefficient between the AQ-10 and TAS-R was statistically significant (β = 0.31, p < .0001), as was the standardized regression coefficient between the TAS-R and human facial emotion recognition scores (β = −0.41, p < .0001). The standardized indirect effect of the AQ-10 on human facial emotion recognition scores was therefore (0.31)(−0.41) = −0.13. We tested the significance of this indirect effect using bootstrapping procedures: unstandardized indirect effects were computed for each of 1,000 bootstrapped samples, and the 95% confidence interval (CI) was computed from the indirect effects at the 2.5th and 97.5th percentiles. The bootstrapped unstandardized indirect effect of the AQ-10 on human facial emotion recognition scores was −0.16, 95% CI = [−0.24, −0.09], p < .0001. These results show that despite the lack of a direct effect of the AQ-10 on human facial emotion recognition scores (β = −0.02, p = .6913), there was a significant indirect effect of the AQ-10 through the TAS-R.
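The percentile-bootstrap logic described here can be illustrated with a short, self-contained Python sketch; the data are simulated with path coefficients loosely matching those reported, and the regression helpers are our own, so this is not the study’s analysis code:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_boot = 247, 1000

# Simulated standardized variables with paths loosely matching the
# reported coefficients (illustration only).
aq10 = rng.normal(size=n)
tas_r = 0.31 * aq10 + rng.normal(size=n)               # a path
fer = -0.41 * tas_r + 0.0 * aq10 + rng.normal(size=n)  # b path, null c' path

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # a: mediator on predictor
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # b: outcome on mediator, x held
    return a * b

# Percentile bootstrap: resample cases with replacement, recompute a*b.
boot = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(aq10[idx], tas_r[idx], fer[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(aq10, tas_r, fer):.3f}, "
      f"95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```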

Figure 3. Path diagram for the mediation analysis in which autistic traits predicted human facial emotion recognition scores through alexithymia. Note. N = 247. Standardized regression coefficients are depicted for the relationship between autistic traits and performance on the human facial emotion recognition test as mediated by alexithymia. The standardized regression coefficient between autistic traits and human facial emotion recognition scores, controlling for alexithymia, is in parentheses. Brackets indicate 95% confidence intervals for each standardized regression coefficient. AQ-10 = Short Autism Spectrum Quotient (Allison et al., 2012); TAS-R = Revised Toronto Alexithymia Scale (Taylor et al., 1992). *p < .0001.
Figure 4 presents a path diagram for the mediation analysis in which the AQ-10 predicted anime facial emotion recognition scores through the TAS-R. The standardized regression coefficient between the AQ-10 and TAS-R was statistically significant (β = 0.31, p < .0001), as was the standardized regression coefficient between the TAS-R and anime facial emotion recognition scores (β = −0.37, p < .0001). The standardized indirect effect of the AQ-10 on anime facial emotion recognition scores was therefore (0.31)(−0.37) = −0.12. As in the previous mediation analysis, unstandardized indirect effects were computed for each of 1,000 bootstrapped samples, and the 95% CI was computed from the indirect effects at the 2.5th and 97.5th percentiles. The bootstrapped unstandardized indirect effect of the AQ-10 on anime facial emotion recognition scores was −0.16, 95% CI = [−0.25, −0.08], p < .0001. These results show that despite the lack of a direct effect of the AQ-10 on anime facial emotion recognition scores (β = 0.07, p = .2971), there was a significant indirect effect of the AQ-10 through the TAS-R.

Figure 4. Path diagram for the mediation analysis in which autistic traits predicted anime facial emotion recognition scores through alexithymia. Note. N = 247. Standardized regression coefficients are depicted for the relationship between autistic traits and performance on the anime facial emotion recognition test as mediated by alexithymia. The standardized regression coefficient between autistic traits and anime facial emotion recognition scores, controlling for alexithymia, is in parentheses. Brackets indicate 95% confidence intervals for each standardized regression coefficient. AQ-10 = Short Autism Spectrum Quotient (Allison et al., 2012); TAS-R = Revised Toronto Alexithymia Scale (Taylor et al., 1992). *p < .0001.
Comparing analyses between participants from the psychology participant pool and Amazon Mechanical Turk
Supplementary Tables 1–6 present the results of analyses from the previous sections, separated by whether participants were from the subsample recruited from the psychology participant pool (n = 125) or Amazon Mechanical Turk (n = 122). The same patterns emerged regardless of whether analyses used the full sample (N = 247) or either of the subsamples. Thus, our discussion of findings will only focus on those involving the full sample to avoid repetition.
Discussion
The purpose of this study was to examine whether individuals higher in autistic traits experience more difficulty with recognizing and interpreting the emotional expressions of human faces, while not showing such a deficit for anime faces. Based on previous research that compared the performance of individuals with ASC or elevated autistic traits on human and cartoon versions of facial emotion recognition tests (Atherton & Cross, 2022; Brosnan et al., 2015; Cross et al., 2019, 2022; Rosset et al., 2008), we hypothesized that participants higher in autistic traits would perform worse on a facial emotion recognition test involving human stimuli but not on one involving anime stimuli, due to the latter’s simplified and exaggerated expressions. We also hypothesized that participants higher in alexithymia would perform worse on the human facial emotion recognition test, and that alexithymia would be more strongly associated than autistic traits with scores on the human and anime facial emotion recognition tests, following previous studies (e.g., Bird & Cook, 2013; Cook et al., 2013; Cuve et al., 2021).
Our hypotheses were supported by the data. Participants higher in autistic traits performed significantly worse on the facial emotion recognition test featuring human stimuli. This finding supports both the validity of the human facial emotion recognition test and previous research demonstrating that individuals on the autism spectrum tend to have greater difficulty interpreting the meaning of human facial expressions (e.g., McKenzie et al., 2018; Poljac et al., 2013; Uljarevic & Hamilton, 2013).
In contrast to the significant negative correlation between scores on the AQ-10 and performance at recognizing emotions in human faces, there was no significant correlation between AQ-10 scores and performance on the anime facial emotion recognition test. This finding suggests that while it is not easier for individuals higher in autistic traits to recognize the facial expressions of anime characters compared with individuals lower in autistic traits, it is also not more difficult for them. Past work has found similar evidence that the facial emotion recognition of individuals with ASC or elevated autistic traits is less impaired when viewing cartoon as opposed to human faces (Atherton & Cross, 2022; Brosnan et al., 2015; Cross et al., 2019, 2022; Rosset et al., 2008). We extend this evidence for the first time to anime faces, using a newly developed facial emotion recognition test. The exaggerated facial expressions that are characteristic of anime characters may operate as a protective factor against the deficits in facial emotion recognition typically seen in individuals with ASC or elevated autistic traits (Atherton et al., 2023; Liu et al., 2019; Rozema, 2015). For this reason, individuals on the autism spectrum may be especially drawn to anime (Atherton & Cross, 2018), and they have been shown to have an affinity for this type of media relative to other interests (Kuo et al., 2014; South et al., 2005). In our sample, more autistic traits were weakly correlated with more frequent anime or manga use, although this relationship was not significant.
Consistent with our hypotheses and previous research (e.g., Bird & Cook, 2013), the multiple regression and mediation analyses revealed a potentially important caveat in the associations between autistic traits and facial emotion recognition: the overlapping relevance of alexithymia. In contrast to the zero-order correlation, autistic traits were no longer significantly predictive of facial emotion recognition in human faces after controlling for alexithymia and potential confounds like age, frequency of social interaction, and frequency of anime or manga use. Additionally, alexithymia was more strongly and negatively correlated with performance on the human and anime facial emotion recognition tests than autistic traits, both in the zero-order correlations and in the hierarchical multiple regression models in which they controlled for each other. Finally, alexithymia fully mediated the relationship between autistic traits and both human and anime facial emotion recognition scores. We found no direct effect of autistic traits on emotion recognition when viewing either human or anime faces, but there was a significant indirect effect of autistic traits on human and anime facial emotion recognition through alexithymia. These findings suggest that while difficulty recognizing facial emotional expressions, whether in human or anime faces, characterizes individuals high in autistic traits, the overlapping subclinical trait of alexithymia is one possible source of this difficulty. Importantly, this difficulty is attenuated when non-human anime faces are viewed. Future research and clinical interventions might consider the role of alexithymia when it comes to helping autistic individuals with facial emotion recognition.
Issues in the measurement of autism spectrum and facial emotion recognition
While individuals recruited for the current study were not given a formal clinical evaluation or diagnosis, they completed the AQ-10 (Allison et al., 2012), a short scale designed to measure autistic traits. Individuals with higher scores have more autistic traits, and those who score 6 or higher might be referred for an assessment to formally diagnose ASC but cannot be assumed to have such a diagnosis. Thus, a strength of our sample is that it consists of individuals from the community with varying degrees of autistic traits, allowing for more statistically powerful, robust, and generalizable tests of how individuals on the autism spectrum process facial emotional expressions. Indeed, our sample provided a good distribution of autistic traits as measured by the AQ-10, consistent with the idea that ASC are best conceptualized as the extreme end of a broader continuum of autistic traits (Allison et al., 2012; Baron-Cohen et al., 1997; Ruzich et al., 2016). Further supporting this idea, 40 (16%) of the 247 participants scored 6 or higher on the scale, suggesting that they might be referred for possible clinical diagnosis. Although recruiting individuals with a formal diagnosis of ASC may be appropriate for some clinical or research purposes, it cannot account for variability along the autism spectrum or for individuals who may be on the autism spectrum but were never diagnosed. With the AQ-10, autistic individuals unaware of their condition would still be identified through their heightened autistic traits.
The distributions of participants’ scores on both the human and anime facial emotion recognition tests suggest that these tests may be useful for future research. The two tests clearly distinguish those who can more easily recognize and identify facial emotional expressions from those who have more difficulty. Furthermore, their construct validity is evidenced by their strong and significant correlations with the measure of alexithymia and with each other. The mean score on the human version of the test was higher than the mean score on the anime version, which may reflect the fact that most individuals are less familiar with anime characters than with human faces. It may also reflect the use of human faces sourced from a collection of images that had been previously validated (Olszanowski et al., 2015), whereas the anime faces were found on Google Images by the first author. Future attempts to improve the reliability and validity of these two facial emotion recognition tests would benefit from a more detailed item analysis of how participants performed on the individual faces chosen (see, e.g., Passarelli et al., 2018), but that endeavor is beyond the scope of this study.
Limitations and future directions
Contrary to two previous studies finding that individuals with ASC performed better than those without ASC at recognizing emotions expressed in cartoon faces (Brosnan et al., Reference Brosnan, Johnson, Grawmeyer, Chapman and Benton2015; Cross et al., Reference Cross, Piovesan and Atherton2022), individuals higher in autistic traits in the present study were neither better nor worse at recognizing the facial expressions of anime characters. A similar lack of difference between autistic and neurotypical individuals in facial emotion recognition for cartoon faces was found in two other studies (Atherton & Cross, Reference Atherton and Cross2022; Rosset et al., Reference Rosset, Rondan, Da Fonseca, Santos, Assouline and Deruelle2008), one of which also recruited individuals varying in autistic traits rather than those diagnosed with ASC. Although we described it as a strength of our sample, it is also a limitation that we recruited participants from community populations rather than from those clinically diagnosed with ASC. A sample of individuals with ASC would have allowed more direct comparisons with much of the previous research on the autism spectrum and facial emotion recognition, which has focused on clinical populations. Had the study included participants clinically diagnosed with ASC instead of, or in addition to, participants with elevated autistic traits, our results might have been different. Importantly, findings from this sample of individuals varying in autistic traits cannot be generalized to individuals with ASC or to other clinical populations, and we cannot make claims about individuals with ASC based on the present study.
Another major limitation that requires cautious interpretation of our findings is the use of the AQ-10 to measure autistic traits. Because we used the short form rather than the full form of the Autism Spectrum Quotient (AQ; Baron-Cohen et al., Reference Baron-Cohen, Wheelwright, Skinner, Martin and Clubley2001), scores and results from the present study cannot be directly compared to those of previous studies using the full form AQ that helped provide the empirical basis for this work (Actis-Grosso et al., Reference Actis-Grosso, Bossi and Ricciardelli2015; Atherton & Cross, Reference Atherton and Cross2022; Poljac et al., Reference Poljac, Poljac and Wagemans2013). An additional concern is that the AQ-10 showed low internal consistency as estimated with Cronbach’s alpha, both in our sample and in previous adult samples from the general population (Jia et al., Reference Jia, Steelman and Jia2019; Sizoo et al., Reference Sizoo, Horwitz, Teunisse, Kan, Vissers, Forceville, Van Voorst and Geurts2015; Taylor et al., Reference Taylor, Livingston, Clutterbuck and Shah2020). To be sure, the AQ-10 has been commonly used since it was developed, specifically as a more efficient and less time-consuming measure of autistic traits in the general population (e.g., Bertrams & Schlegel, Reference Bertrams and Schlegel2020; Forby et al., Reference Forby, Anderson, Cheng, Foulsham, Karstadt, Dawson, Pazhoohi and Kingstone2023; Gollwitzer et al., Reference Gollwitzer, Martel, McPartland and Bargh2019; Mason et al., Reference Mason, Ronald, Ambler, Caspi, Houts, Poulton, Ramrakha, Wertz, Moffitt and Happé2021; Pazhoohi et al., Reference Pazhoohi, Forby and Kingstone2021; Rudolph et al., Reference Rudolph, Lundin, Åhs, Dalman and Kosidou2018). We thus used the AQ-10 because it had been used by other researchers and substantially reduced the time required to complete the study, an important consideration given the limited funding available to compensate participants. However, we must emphasize the limitations of the AQ-10 and strongly encourage caution in future research, especially because of its low internal consistency. Low reliability attenuates observed associations, so results are potentially biased when a measure like the AQ-10 is used to predict constructs such as facial emotion recognition and compared against a measure with much higher internal consistency like the TAS-R. We recommend that researchers use the full form AQ to avoid these issues with the short form AQ-10.
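To make this attenuation concern concrete, the sketch below computes Cronbach’s alpha and applies Spearman’s classic correction for attenuation, which estimates how much an observed correlation is deflated by unreliable measurement. The item matrix and the example reliability values are hypothetical.

```python
# Sketch: Cronbach's alpha for a k-item scale, plus Spearman's correction
# for attenuation. `items` is a hypothetical (n_respondents x k) array.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def disattenuated_r(r_obs, rel_x, rel_y):
    # Estimated true-score correlation, given an observed correlation and
    # the reliabilities of the two measures.
    return r_obs / np.sqrt(rel_x * rel_y)

# Hypothetical example: an observed r of -.20 measured with reliabilities
# of .45 and .85 implies a true-score correlation near -.32.
print(disattenuated_r(-0.20, 0.45, 0.85))
```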
Similarly, the present study was limited by its use of the TAS-R to measure alexithymia. Although the TAS-R showed good reliability and validity in its initial development (Taylor et al., Reference Taylor, Bagby and Parker1992), it was subsequently superseded by the 20-item Toronto Alexithymia Scale (TAS-20; Bagby et al., Reference Bagby, Parker and Taylor1994), which has improved psychometric properties. The TAS-20 has also been far more commonly used to measure alexithymia in empirical research (Bagby et al., Reference Bagby, Parker and Taylor2020; Kooiman et al., Reference Kooiman, Spinhoven and Trijsburg2002), making it difficult to compare the results of this study with the many previous studies that measured alexithymia with the TAS-20. Unlike our deliberate choice of the AQ-10, our use of the TAS-R rather than the more common and psychometrically stronger TAS-20 was unintentional: when the alexithymia items were entered into the survey software used to create this study, the Taylor et al. (Reference Taylor, Bagby and Parker1992) article was consulted by mistake instead of the intended Bagby et al. (Reference Bagby, Parker and Taylor1994) article. Future research should employ the full form AQ and the TAS-20, perhaps alongside other measures of autistic traits and alexithymia, to test whether our findings replicate and are robust.
A final limitation concerns the facial emotion recognition tests. As previously mentioned, the mean accuracy score for emotion recognition was higher for human faces than for anime faces, although perfect scores were earned by participants in both conditions. Anime is not an especially common interest, and participants showed moderate variation in their frequency of anime or manga use. Thus, it is possible that participants found the expressions of the anime stimuli more difficult to interpret on average, regardless of whether they were high in autistic traits or alexithymia, due to a lack of familiarity with this style of media. However, we controlled for frequency of anime or manga use in a series of hierarchical multiple regression models and found no evidence that this frequency was relevant to the associations between autistic traits or alexithymia and anime facial emotion recognition. Alternatively, participants might have found the emotional expressions of anime faces more difficult on average because the anime stimuli were not previously validated as the human stimuli were. While we were not able to include a pilot study to ensure that our selected images accurately depicted different facial emotional expressions, we hope that these results encourage future researchers to employ a more rigorous selection process for anime faces used in facial emotion recognition tasks.
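In hierarchical-regression terms, the check we describe amounts to comparing nested models with and without the trait predictors once frequency of use is included. The sketch below illustrates this with synthetic data; all variable names and values are hypothetical placeholders and do not reproduce the study’s models, which also included other covariates.

```python
# Sketch of the nested-model comparison behind a hierarchical regression:
# does adding the trait predictors improve on a model containing only
# frequency of anime/manga use? All data here are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "anime_freq": rng.integers(1, 6, 247).astype(float),  # self-reported use
    "aq": rng.integers(0, 11, 247).astype(float),         # AQ-10 total (0-10)
    "tas": rng.normal(50, 10, 247),                       # alexithymia total
})
df["anime_fer"] = 0.9 - 0.003 * df["tas"] + rng.normal(0, 0.05, 247)

step1 = smf.ols("anime_fer ~ anime_freq", data=df).fit()             # step 1
step2 = smf.ols("anime_fer ~ anime_freq + aq + tas", data=df).fit()  # step 2

f_stat, p_value, df_diff = step2.compare_f_test(step1)  # test of R^2 increment
print(f"Step 1 R^2 = {step1.rsquared:.3f}, Step 2 R^2 = {step2.rsquared:.3f}")
print(f"Delta R^2: F = {f_stat:.2f}, p = {p_value:.4f}")
```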
To improve future studies examining facial emotion recognition of anime characters, it might be helpful to create a validated collection of anime faces with different emotional expressions, comparable to the Warsaw Set of Emotional Facial Expression Pictures (Olszanowski et al., Reference Olszanowski, Pochwatko, Kuklinski, Scibor-Rylski, Lewinski and Ohme2015). This project might involve commissioning one or more sufficiently skilled artists to draw multiple original anime characters expressing a variety of facial emotions. Alternatively, stimuli of anime faces expressing different emotions could be generated through the responsible use of artificial intelligence. These images would then need to be examined by a panel of independent judges to determine their validity and effectiveness in conveying the different emotions. Further research could also investigate whether it would be preferable to have images drawn in a consistent visual style, or in different styles to encompass the artistic variety found in anime.
The most recent data from the Centers for Disease Control and Prevention indicate that one in 54 children in the United States is diagnosed with ASC (Maenner et al., Reference Maenner, Shaw, Baio, Washington, Patrick, DiRienzo, Christensen, Wiggins, Pettygrove, Andrews, Lopez, Hudson, Baroud, Schwenk, White, Rosenberg, Lee, Harrington, Huston and Dietz2020). It is therefore important for research to examine what might affect or attenuate the social deficits experienced by those on the autism spectrum or with elevated autistic traits, so that more effective interventions can be developed. The current findings inform future interventions in at least two ways. First, our data suggest that individuals on the autism spectrum might benefit from interventions that specifically target alexithymia, because this subclinical trait was implicated as a potential mechanism through which autistic individuals experience deficits in facial emotion recognition. Second, our data suggest that interventions directed at improving social–emotional functioning in autistic populations might consider using anime rather than human characters to improve accessibility and effectiveness.
Supplementary material
The supplementary material for this article can be found at https://dx.doi.org/10.1017/S0954579425000100.
Acknowledgements
This research was supported by funding from the Abington College Undergraduate Research Activities (ACURA). We thank Meghan M. Gillen for reviewing an early version of the manuscript and providing helpful feedback. The complete set of images depicting human and anime faces used in the present study can be found at the following link: https://osf.io/qyzgd.
Funding statement
This research was supported by funding from the Abington College Undergraduate Research Activities (ACURA) program.
Competing interests
The authors declare none.