Fetal exposure to prenatal stress can increase risk for psychopathology, but postnatal caregiving may offset this risk. This study tests whether maternal sensitivity and the home environment during early childhood modify associations of prenatal stress with offspring behavior in a sample of 127 mother–child pairs. Mothers reported on perceived stress during pregnancy. Maternal sensitivity was rated by coders during a parent–child free-play task when children were 4 years old. One year later, mothers reported on the home environment and on child internalizing and externalizing behaviors, and children completed an assessment of inhibitory control. As hypothesized, the early childhood caregiving environment modified associations of prenatal stress with child behavior. Specifically, prenatal stress was associated with more internalizing behaviors at lower levels of maternal sensitivity and in home environments lower in emotional support and cognitive stimulation, but not at mean or higher levels of either. Furthermore, prenatal stress was associated with lower inhibitory control only at lower levels of maternal sensitivity, but not at higher levels. Maternal sensitivity and an emotionally supportive, cognitively stimulating home environment in early childhood may be important factors that mitigate risk for mental health problems among children exposed to prenatal stress.
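As a rough illustration of the moderation analysis summarized above, the sketch below probes a prenatal stress × maternal sensitivity interaction with simple slopes at ±1 SD of the moderator. It is a minimal R sketch with hypothetical data-frame and variable names (dat, prenatal_stress, maternal_sensitivity, internalizing), not the authors' code.

```r
# Hypothetical moderation sketch: prenatal stress x maternal sensitivity
# predicting internalizing behavior, probed with simple slopes at +/- 1 SD.
# 'dat' and its columns are illustrative, not the study's data.
dat$stress_c      <- dat$prenatal_stress - mean(dat$prenatal_stress, na.rm = TRUE)
dat$sensitivity_c <- dat$maternal_sensitivity - mean(dat$maternal_sensitivity, na.rm = TRUE)

fit <- lm(internalizing ~ stress_c * sensitivity_c, data = dat)
summary(fit)  # the interaction term tests moderation

# Simple slopes of prenatal stress at -1 SD and +1 SD of maternal sensitivity
sd_sens  <- sd(dat$sensitivity_c, na.rm = TRUE)
fit_low  <- lm(internalizing ~ stress_c * I(sensitivity_c + sd_sens), data = dat)
fit_high <- lm(internalizing ~ stress_c * I(sensitivity_c - sd_sens), data = dat)
coef(summary(fit_low))["stress_c", ]   # slope of stress when sensitivity is low
coef(summary(fit_high))["stress_c", ]  # slope of stress when sensitivity is high
```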
Inhibitory control plays an important role in children’s cognitive and socioemotional development, including their psychopathology. It has been established that contextual factors such as socioeconomic status (SES) and parents’ psychopathology are associated with children’s inhibitory control. However, the relations between the neural correlates of inhibitory control and contextual factors have rarely been examined in longitudinal studies. In the present study, we used both event-related potential (ERP) components and time-frequency measures of inhibitory control to evaluate the neural pathways between contextual factors, including prenatal SES and maternal psychopathology, and children’s behavioral and emotional problems in a large sample of children (N = 560; 51.75% female; mean age = 7.13 years; range = 4–11 years). Results showed that theta power, which was positively predicted by prenatal SES and negatively related to children’s externalizing problems, mediated the longitudinal, negative relation between them. ERP amplitudes and latencies did not mediate the longitudinal association between prenatal risk factors (i.e., prenatal SES and maternal psychopathology) and children’s internalizing and externalizing problems. Our findings advance understanding of the neural pathways linking early risk factors to children’s psychopathology.
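For readers unfamiliar with the kind of mediation model described, the following is a minimal R sketch of a single-mediator path model (prenatal SES → theta power → externalizing problems) using lavaan. Variable names and the bootstrap settings are illustrative assumptions, not the authors' pipeline.

```r
# Illustrative single-mediator model: prenatal SES -> theta power ->
# externalizing problems. Variable names are placeholders; this is not the
# authors' actual model or preprocessing.
library(lavaan)

model <- '
  theta_power   ~ a * prenatal_ses
  externalizing ~ b * theta_power + cp * prenatal_ses
  indirect := a * b        # mediated (indirect) effect
  total    := cp + a * b   # total effect
'

fit <- sem(model, data = dat, se = "bootstrap", bootstrap = 1000)
summary(fit, standardized = TRUE, ci = TRUE)
```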
The present study explored how individual differences and development of gray matter architecture in inferior frontal gyri (IFG), anterior cingulate (ACC), and inferior parietal lobe (IPL) relate to development of response inhibition as measured by both the Stop Signal Task (SST) and the Go/No-Go (GNG) task in a longitudinal sample of healthy adolescents and young adults. Reliability of behavioral and neural measures was also explored.
Participants and Methods:
A total of 145 individuals contributed data from the second through fifth timepoints of an accelerated longitudinal study of adolescent brain and behavioral development at the University of Minnesota. At baseline, participants were 9 to 23 years of age and typically developing. Assessment waves were spaced approximately 2 years apart. Behavioral measures of response inhibition collected at each assessment included GNG Commission Errors (CE) and the SST Stop Signal Reaction Time (SSRT). Structural T1 MRI scans were collected on a Siemens 3T Tim Trio and processed with the longitudinal FreeSurfer 6.0 pipeline to yield cortical thickness (CT) and surface area values. Regions of interest based on the Desikan-Killiany-Tourville atlas included IFG subregions (pars opercularis [PO] and pars triangularis [PT]), the ACC, and the IPL. The cuneus and global brain measures were evaluated as control regions. Retest stability of all measures was calculated using the psych package in R. Linear mixed-effects modeling with the lme4 R package identified whether age-based trajectories for SSRTs and GNG CEs were best fit by a linear, quadratic, or inverse curve. Disaggregated between- and within-subjects effects of regional cortical architecture measures were then added to the longitudinal behavioral models to identify individual differences and developmental effects, respectively.
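A hedged sketch of the two analysis steps named above, retest stability with the psych package and a disaggregated lme4 model with between-person and within-person thickness effects, is given below. Data frames and column names (ct_wide, long, ct_left_PO, SSRT) are placeholders, and the inverse-age term simply follows the best-fitting form reported in the Results.

```r
# Hedged sketch, assuming hypothetical data frames: 'ct_wide' (one thickness
# column per wave) and 'long' (one row per person per wave).
library(psych)
library(lme4)
library(dplyr)

# (1) Retest stability (ICCs) of a neural measure across assessment waves
ICC(ct_wide[, c("wave2", "wave3", "wave4", "wave5")])

# (2) Disaggregate a regional thickness predictor into between-person
#     (person mean) and within-person (deviation) components, then model SSRT
long <- long %>%
  group_by(id) %>%
  mutate(ct_between = mean(ct_left_PO, na.rm = TRUE),   # individual differences
         ct_within  = ct_left_PO - ct_between) %>%      # developmental change
  ungroup()

m <- lmer(SSRT ~ I(1 / age) + ct_between + ct_within + (1 | id), data = long)
summary(m)
```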
Results:
Both response inhibition metrics demonstrated fair reliability and were best fit by an inverse age trajectory. Neural measures demonstrated excellent retest stability (all ICCs > 0.834). Age-based analyses of regional CT identified heterogeneous patterns of development, including linear trajectories for ACC and inverse age trajectories for bilateral PT. Individuals with thinner left PO showed worse performance on both response inhibition tasks. SSRTs were related to individual differences in right PO thickness and surface area. A developmental pattern was observed for right PT cortical thickness, where thinning over time was related to better GNG performance. Lower surface area of the right PT was related to worse GNG performance. No individual differences or developmental patterns were observed for the ACC, IPL, cuneus, or global metrics.
Conclusions:
This study examined the adolescent development of response inhibition and its association with cortical architecture in the IFG, ACC, and IPL. The two response inhibition tasks demonstrated similar developmental patterns, with the steepest improvements in early adolescence, and both were related to left PO thickness, but each measure had unique relationships with other IFG regions. This study indicates that a region of the IFG, the pars opercularis, relates to both individual differences and developmental change in response inhibition. These patterns suggest brain–behavior associations that could be further explored in functional imaging studies and that may index risk for psychopathology in vulnerable individuals.
Many studies have reported that children with autism spectrum disorder (ASD) show poorer executive functions (EFs) than typically developing (TD) children in many domains, such as planning, flexibility, inhibition, and self-monitoring. The current study used an adapted version of a computerized tower test to investigate the EFs of children with ASD. In addition, children's EF-related behaviors at school were assessed with the teacher-report Behavior Rating Inventory of Executive Function, Second Edition (BRIEF-2).
Participants and Methods:
Sixty-one children aged 7 to 12 years (M = 9.23) were included in the current study: 29 in the ASD group and 31 in the TD group. All participants completed the adapted computerized tower test, and their teachers completed the BRIEF-2 to assess EF-related behaviors at school.
Results:
There were no significant differences between the ASD and TD groups on any index of the tower test, implying that the current indexes might not be sensitive enough to distinguish children with ASD from those without. We therefore further investigated the correlations between the tower test and the teacher-rated BRIEF-2 and found different patterns in the two groups. In the ASD group, the task-monitor index was positively correlated with the total number of rule violations, total completion time, and rule violations per item, and negatively correlated with the total achievement score, implying that a poorer ability to monitor tasks leads to longer completion times, more rule violations, and a lower total achievement score. We also found a high correlation between the organization-of-materials scale of the BRIEF-2 and total completion time on the tower test, suggesting that longer problem-solving times in the ASD group are closely related to difficulty keeping the working space organized. In addition, the shift index was positively correlated with total completion time and rule violations per item, indicating that children with poor flexibility in problem solving tend to need more time to complete the tasks and to violate more rules. In the TD group, the only significant correlations were between the inhibition and self-monitor indexes of the BRIEF-2 and rule violations per item on the tower test, suggesting that individuals with behavioral regulation problems, such as difficulties with impulse control and self-monitoring, are more likely to commit rule violations. These results indicate that behavioral regulation plays a more prominent role in the TD group, whereas cognitive and emotional regulation appear more critical in children with ASD.
Conclusions:
We found no significant differences on the computerized tower test between children with and without ASD, suggesting that the current indexes might not be sensitive enough to differentiate the two groups. However, the correlations between the tower test and the teacher-rated BRIEF-2 showed different patterns across groups, indicating that children in the two groups may have relied on different EF domains when completing the tower test.
Therefore, further research could focus on developing new indexes for the tower test and on clarifying the EF mechanisms of children with ASD using different approaches.
Typical evaluations of adult ADHD consist of behavioral self-report rating scales, measures of cognitive or intellectual functioning, and specific measures designed to assess attention. Boone (2009) suggested that continuous monitoring of effort is essential throughout psychological assessments. However, very few research studies have contributed to the malingering literature on the ADHD population. Many studies have reported adequate use of symptom validity tests, which assess effortful performance, in ADHD evaluations (Jasinski et al., 2011; Sollman et al., 2010; Schneider et al., 2014). Because of the length of ADHD assessments, individuals are likely to become weary and fatigued, which can impact their performance. This study investigates the eye-movement strategies used by a clinical ADHD population, non-ADHD participants, and malingering simulators when performing a common, simple visual search task.
Participants and Methods:
A total of 153 college students participated in this study. To be placed in the ADHD group, a participant had to endorse four or more symptoms on the ASRS (N = 37). To be placed in the non-ADHD group, participants had to endorse no ADHD symptoms (N = 43). Participants who did not meet the criteria for the ADHD or non-ADHD groups were placed in an indeterminate group and were not included in the analysis. A total of 20 participants were instructed to fake symptoms related to ADHD during the session. Twelve Spot the Difference images were used as the visual stimuli. Sticky by Tobii Pro (2020) was used to collect eye-movement data; it is an online self-service platform that combines online survey questions with webcam eye tracking, allowing participants to view the images on their home computers.
Results:
Results indicated that participants classified as malingering had a significantly higher Visit Count (M = 17.16, SD = 4.99) than the ADHD (M = 12.53, SD = 43.92) and non-ADHD groups (M = 11.51, SD = 3.23). Results also indicated a statistically significant Area Under the Curve (AUC) = .784 (SE = .067, p = .003, 95% CI = .652–.916). Optimal cutoffs suggest a sensitivity of 50% with a false positive rate of 10%.
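For illustration, an ROC analysis of the kind reported here (AUC, its confidence interval, and a candidate cutoff) could be run in R with the pROC package as sketched below; the data frame eye and its columns are hypothetical, and this is not the authors' code.

```r
# Illustrative ROC analysis distinguishing the malingering group from the
# other participants by Visit Count. The data frame 'eye' and its columns
# (malingering: 0/1; visit_count) are hypothetical.
library(pROC)

roc_obj <- roc(response  = eye$malingering,
               predictor = eye$visit_count,
               direction = "<")            # higher visit counts -> malingering

auc(roc_obj)      # area under the curve
ci.auc(roc_obj)   # 95% confidence interval
coords(roc_obj, x = "best",                # candidate cutoff
       ret = c("threshold", "sensitivity", "specificity"))
```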
Conclusions:
Results indicated that eye-tracking technology could help differentiate malingering simulators from non-malingerers with ADHD. Eye-tracking research relates to a patchwork of fields more diverse than the study of perceptual systems. Because eye movements are closely tied to attentional mechanisms, the study’s results can provide insight into cognitive processes related to malingering performance.
Individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in reward-based learning, which have important implications for behavioral regulation. Prior research has shown that these individuals show altered patterns of risky decision-making, which may be partially explained as a function of dysfunctional reactivity to rewards and punishments. However, research findings on the relationships between ADHD and punishment sensitivity have been mixed. The current study used the Balloon Analog Risk Task (BART) to examine risky decision-making in adults with and without ADHD, with a particular interest in characterizing the manner in which participants react to loss.
Participants and Methods:
612 individuals (mean age = 31.04, SD = 78.77; 329 females, 283 males) were recruited through the UCLA Consortium for Neuropsychiatric Phenomics (CNP). All participants were administered the Structured Clinical Interview for DSM-IV-TR (SCID-IV), which provided the diagnoses used for group comparisons between adults with ADHD (n = 35) and healthy controls (n = 577). A computerized BART paradigm was used to examine impulsivity and risky decision-making; all participants also completed the Barratt Impulsiveness Scale (BIS-11), and ADHD participants completed the Adult Self-Report Scale-V1.1 (ASRS-V1.1). The BART presented two colors of balloons with differing probabilities of exploding, and participants were incentivized to pump the balloons as many times as possible without causing them to explode. The primary endpoint was "mean adjusted pumps", calculated as the mean number of pumps across trials that did not end in an explosion. An index of reactivity to loss was calculated as the difference between the mean adjusted pumps following an explosion and the mean adjusted pumps following trials in which the balloon did not explode.
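The two BART indices defined above can be made concrete with a short sketch. The code below assumes hypothetical trial-level data (one row per balloon with columns subject, trial, pumps, exploded) and is not taken from the CNP dataset.

```r
# Sketch of the two indices defined above, from hypothetical trial-level data
# with one row per balloon (columns: subject, trial, pumps, exploded).
library(dplyr)

bart <- bart %>%
  arrange(subject, trial) %>%
  group_by(subject) %>%
  mutate(prev_exploded = lag(exploded)) %>%   # outcome of the preceding balloon
  ungroup()

indices <- bart %>%
  filter(exploded == 0) %>%                   # "adjusted" pumps: non-explosion trials only
  group_by(subject) %>%
  summarise(
    mean_adjusted_pumps = mean(pumps),
    loss_reactivity     = mean(pumps[prev_exploded == 1], na.rm = TRUE) -
                          mean(pumps[prev_exploded == 0], na.rm = TRUE)
  )
```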
Results:
The ADHD and control groups did not differ on mean adjusted pumps across trials, but they did differ in their reactivity to explosion of balloons that followed the most pumps, incurring the greatest level of loss (F(1, 551) = 7.1, p < 0.01). Interestingly, ADHD participants showed a greater reactivity to loss on these balloons than controls (p < 0.05), indicating that they reduced their number of pumps following balloon explosions more than controls did. For participants as a whole, there were small correlations between loss reactivity and scales of everyday impulsivity on the BIS-11 (ps < 0.05). For ADHD participants, loss reactivity was unrelated to symptoms of inattention but was significantly correlated with symptoms of hyperactivity/impulsivity (p = 0.01) and total ADHD symptoms (p < 0.05) on the ASRS-V1.1.
Conclusions:
In the context of a risky decision-making task, adults with ADHD showed greater reactivity to loss than controls, despite showing comparable patterns of overall performance during the BART. The magnitude of behavioral adjustment following loss was correlated with symptoms of hyperactivity/impulsivity in adults with ADHD, suggesting that loss sensitivity is clinically related to impulsive behavior in everyday life. These findings help to expand our understanding of motivational processing in ADHD and suggest new insight into the ways in which everyday symptoms of ADHD are related to sensitivity to losses and punishments.
Sleep is a restorative function that supports various aspects of well-being, including cognitive function. College students, especially females, report getting less sleep than recommended and more irregular sleep patterns than their male counterparts. Inadequate and irregular sleep are associated with neuropsychological deficits, including more impulsive responding on lab-based tasks. Although many lab-based experiments ask participants to report their sleep patterns, few studies have analyzed how potential changes in sleep affect their findings. Using data from a previously collected study, the present study investigates relations between sleep (i.e., sleep duration and changes in sleep duration) and performance-based measures of inhibition among female college students.
Participants and Methods:
Participants (n = 39) were predominantly first-year students (mean age = 19.27) and Caucasian (51%). They were recruited as part of a larger study exploring how food commercials affect inhibitory control and were randomized to each study condition (watching a food or non-food commercial) across two visits to the lab (T1 and T2). During both visits, they completed questionnaires asking about 1) their sleep duration the night before and 2) their “typical” sleep duration, to capture changes in sleep duration. They also completed a computer-based stop signal task (SST), which required them to correctly identify healthy food images (stop signal accuracy [SSA] healthy) and unhealthy food images (SSA unhealthy) while inhibiting their response during a stop signal delay (SSD) that became increasingly more difficult (i.e., more delayed) as they successfully progressed. Because the main aim of the study was to explore the impact of sleep, analyses controlled for study condition. Analyses involving changes in sleep also accounted for sleep duration the night before the study visit.
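As a minimal sketch of the hierarchical regression strategy described (condition and prior-night sleep duration entered first, change in sleep added second), the R code below uses illustrative variable names (sleep_dat, ssa_unhealthy, sleep_change) rather than the study's actual data.

```r
# Sketch of a hierarchical (sequential) regression: condition and prior-night
# sleep duration first, change in sleep added second. 'sleep_dat' and its
# columns are illustrative.
step1 <- lm(ssa_unhealthy ~ condition + sleep_duration, data = sleep_dat)
step2 <- update(step1, . ~ . + sleep_change)   # sleep_change = "typical" minus prior night

anova(step1, step2)                                    # incremental contribution of sleep change
summary(step2)$r.squared - summary(step1)$r.squared    # R-squared change
```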
Results:
On average, students reported sleeping less the night before the lab visit, getting 38 minutes less sleep than their “typical” sleep (7 hrs 3 min). Hierarchical regression analyses demonstrated that sleep duration the night before the lab visit was not associated with inhibition (i.e., SSA unhealthy, SSA healthy, SSD). In contrast, a greater change in sleep, i.e., getting less sleep than “typical,” was associated with worse inhibition across inhibition variables (SSA healthy, SSA unhealthy, SSD) above and beyond sleep duration at T1. At T2, only one analysis remained significant, such that getting less sleep than “typical” was associated with lower accuracy in appropriately identifying unhealthy images (SSA unhealthy), whereas the other analyses only approached statistical significance.
Conclusions:
These findings suggest that changes in sleep, or getting less sleep than typical, may impact inhibition performance measured in the lab, even when accounting for how much sleep participants got the night before. Specifically, getting less sleep than typical was associated with reduced accuracy in selecting unhealthy images, a finding that was consistent across the two lab visits. These preliminary findings offer opportunities for lab-based experiments to investigate the role of sleep when measuring inhibition performance. Further, clinicians conducting neuropsychological assessments may benefit from assessing patients’ sleep the night before the appointment and determining whether it represents a change from their typical sleep pattern.
Rapid neurodevelopment occurs during adolescence, which may increase the developing brain’s susceptibility to environmental risk and resilience factors. Adverse childhood experiences (ACEs) may confer additional risk to the developing brain, as ACEs have been linked with alterations in BOLD signaling in brain regions underlying inhibitory control. Potential resiliency factors, like a positive family environment, may attenuate the risk associated with ACEs, but limited research has examined potential buffers to adversity’s impact on the developing brain. The current study aimed to examine how ACEs relate to BOLD response during successful inhibition on the Stop Signal Task (SST) in regions underlying inhibitory control from late childhood to early adolescence, and to assess whether aspects of the family environment moderate this relationship.
Participants and Methods:
Participants (N = 9,080; mean age = 10.7 years, range = 9–13.8; 48.5% female, 70.1% non-Hispanic White) were drawn from the larger Adolescent Brain Cognitive Development (ABCD) Study cohort. ACE risk scores were created (by EAS) using parent and child reports of youths’ exposure to adverse experiences collected from baseline to the 2-year follow-up. For the family environment, levels of family conflict were assessed from youth reports on the Family Environment Scale at baseline and 2-year follow-up. The SST, a task-based fMRI paradigm, was used to measure inhibitory control (contrast: correct stop > correct go); the task was administered at baseline and 2-year follow-up. Participants were excluded if flagged for poor task performance. ROIs included the left and right dorsolateral prefrontal cortex, anterior cingulate cortex, anterior insula, inferior frontal gyrus (IFG), and pre-supplementary motor area (pre-SMA). Separate linear mixed-effects models assessed the relationship between ACEs and BOLD signal in each ROI while controlling for demographics (age, sex assigned at birth, race, ethnicity, household income, parental education) and internalizing scores, with random effects of subject and MRI model.
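One of the linear mixed-effects models described might look like the hedged R sketch below, with a single ROI contrast as the outcome, demographic covariates as fixed effects, and random intercepts for subject and MRI model. Column names are placeholders, not ABCD variable names.

```r
# Hedged sketch of one ROI model: ACE score predicting the right-IFG
# stop > go contrast, with demographic covariates and random intercepts for
# subject and MRI model. Column names are placeholders, not ABCD variables.
library(lme4)
library(lmerTest)   # adds p-values for fixed effects

m <- lmer(rIFG_stop_gt_go ~ ace_score + age + sex + race + ethnicity +
            income + parent_edu + internalizing +
            (1 | subject) + (1 | mri_model),
          data = abcd)
summary(m)
```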
Results:
Greater ACE exposure was associated with reduced BOLD response in the opercular region of the right IFG (b = -0.002, p = .02) and in the left (b = -0.002, p = .01) and right pre-SMA (b = -0.002, p = .01). Family conflict was related to altered activation patterns in the left pre-SMA, where youth with lower family conflict demonstrated a more robust negative relationship (b = .001, p = .04). ACEs were not a significant predictor in the other ROIs, and the relationship between ACEs and BOLD response did not significantly differ across time. Follow-up brain–behavior correlations showed that, in youth with lower ACEs, greater pre-SMA activation was correlated with less impulsive behavior.
Conclusions:
Preadolescents with a history of ACEs show blunted activation in regions underlying inhibitory control, which may increase the risk for poorer inhibitory control in the future, with downstream implications for behavioral and health outcomes. Further, results provide preliminary evidence for the family environment’s contribution to brain health. Future work is needed to examine other resiliency factors that may modulate the impact of ACE exposure during childhood and adolescence. Further, clinical scientists should continue to examine the relationship between ACEs and the neural and behavioral correlates of inhibitory control across adolescent development, as risk-taking behaviors progress.
Inhibitory control impairment is highly prevalent following traumatic brain injury (TBI), yet there have been no empirical investigations into whether it could explain social disinhibition following severe TBI, i.e., socially inappropriate behaviour of a verbal, physical, or sexual nature. Further, social context has proven to be important in studying social disinhibition, and using a social version of an established inhibitory control task may provide a new perspective. Therefore, the objectives of this study were to investigate the role of inhibitory control impairment in social disinhibition following severe TBI, using a social and a non-social task. We hypothesized that people with TBI and clinical levels of social disinhibition would perform worse on both task versions than those with low disinhibition levels. Further, we hypothesized that participants high in social disinhibition would perform worse on the social than on the non-social version.
Participants and Methods:
We conducted a between-group comparative study. Twenty-six adult participants with severe TBI were matched with 27 healthy adult controls on gender, age, and education. The Frontal Systems Behavior Scale and the Social Disinhibition Interview were used to assess social disinhibition. A computerized task based on the cued go/no-go paradigm was used to assess inhibitory control. We included two versions of this task: a coloured (non-social) Go/No-Go with differently coloured rectangles, and an emotional (social) Go/No-Go with emotional faces serving as ‘go’ and ‘no-go’ cues. Two-way mixed ANCOVAs were used to test between-group differences in errors of commission and response speed.
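A minimal sketch of a two-way mixed ANCOVA of this kind is shown below, using base R with long-format data (one row per participant per task version). The covariate shown (age) is purely illustrative, since the abstract does not name the covariates used.

```r
# Sketch of a two-way mixed ANCOVA on commission errors: group (TBI vs control,
# between-subjects) x task version (social vs non-social, within-subjects),
# with an illustrative covariate (age). 'gng_long' is hypothetical long-format
# data, one row per participant per task version.
m <- aov(commission_errors ~ age + group * task_version +
           Error(participant / task_version),
         data = gng_long)
summary(m)
```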
Results:
Unexpectedly, the TBI and control groups did not significantly differ in their levels of depression, anxiety, stress, or social disinhibition. Overall, participants were slower (F(1,47) = 15.212, p < .001, ηp² = .245) and made more errors of commission on no-go trials (F(1,44) = 11.560, p = .001, ηp² = .208) on the social Go/No-Go task. There was no main effect of brain injury status on errors of commission on no-go trials or on mean reaction times. When categorized by disinhibition level (high vs. low), participants in the high-disinhibition group made more errors on the social task than those in the low-disinhibition group (F(1,41) = 4.095, p = .050, ηp² = .091), and more errors on the social than on the non-social task (task × group interaction: F(1,41) = 7.233, p = .010, ηp² = .150).
Conclusions:
Based on these initial results, social disinhibition is associated with inhibitory control impairment, although this is only evident when a social inhibitory control task is used for assessment. We did not find any relationship between social disinhibition and the speed with which people react to stimuli. The results of this study add to the conceptualization of social disinhibition that is commonly present after severe TBI.
The current study had two primary objectives: 1) To assess the dose-response relationship between acute bouts of aerobic exercise intensity and performance in multiple cognitive domains (episodic memory, attention, and executive function) and 2) To replicate and extend the literature by examining the dose-response relationship between aerobic exercise intensity and pattern separation.
Participants and Methods:
18 young adults (mean age = 21.6, SD = 2.6; mean education = 13.9, SD = 3.4; 50% female) were recruited from The Ohio State University and the surrounding area (Columbus, OH). Participants completed control (no exercise), light-intensity, and vigorous-intensity exercise conditions across three counterbalanced appointments. For each participant, all three appointments occurred at approximately the same time of day, with at least 2 days between appointments. Following the rest or exercise conditions and after an approximately 7-minute delay, participants completed a Mnemonic Similarity Task (MST; Stark et al., 2019) to assess pattern separation. This task was always administered first because we attempted to replicate previous studies and further clarify the relationship between acute bouts of aerobic exercise and pattern separation by implementing an exercise stimulus that varied in intensity. After the MST, three brief cognitive tasks (roughly 5 min each) were administered in counterbalanced order: a gradual-onset continuous performance task (gradCPT; Esterman et al., 2013), the flanker task from the NIH Toolbox, and a face-name episodic memory task. Here we report results from the gradCPT, which assesses sustained attention and inhibitory control. Heart rate and ratings of perceived exertion were collected to validate the rest and exercise conditions. Repeated-measures ANOVAs were used to assess the effect of exercise condition on the dependent measures of sustained attention, inhibitory control, and pattern separation.
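A bare-bones version of the repeated-measures ANOVA described, with exercise condition as a within-subjects factor and Holm-corrected pairwise follow-ups, is sketched below; the data frame cpt_long and its columns are assumptions for illustration.

```r
# Sketch of a one-way repeated-measures ANOVA on gradCPT d', with exercise
# condition (control, light, vigorous) as a within-subjects factor.
# 'cpt_long' and its columns are assumed long-format data.
m_dprime <- aov(d_prime ~ condition + Error(subject / condition), data = cpt_long)
summary(m_dprime)

# Follow-up pairwise comparisons between conditions (Holm-corrected)
pairwise.t.test(cpt_long$d_prime, cpt_long$condition,
                paired = TRUE, p.adjust.method = "holm")
```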
Results:
One-way repeated-measures ANOVAs revealed a main effect of exercise condition on gradCPT task performance for task discrimination ability (d') and commission error rate (p’s < .05). Pairwise comparisons revealed task discrimination ability was significantly higher following the light intensity exercise condition versus the control condition. Commission error rate was significantly lower for both the light and vigorous exercise conditions compared to the control condition. For the MST, two-way repeated-measures ANOVAs revealed an expected significant main effect of lure similarity on task performance; however, there was not a significant main effect of exercise intensity on task performance (or a significant interaction).
Conclusions:
The current study indicated that acute bouts of exercise improve both sustained attention and inhibitory control as measured with the gradCPT. We did not replicate previous work reporting that acute bouts of exercise improve pattern separation in young adults; our results further indicate that vigorous exercise neither impaired nor improved pattern separation performance. Finally, our results indicate that light-intensity exercise is sufficient to enhance sustained attention and inhibitory control, as there were no significant differences in performance following light versus vigorous exercise.
Recent findings suggest that brief dialectical behavior therapy (DBT) for borderline personality disorder is effective for reducing self-harm, but it remains unknown which patients are likely to improve in brief v. 12 months of DBT. Research is needed to identify patient characteristics that moderate outcomes. Here, we characterized changes in cognition across brief DBT (DBT-6) v. a standard 12-month course (DBT-12) and examined whether cognition predicted self-harm outcomes in each arm.
Methods
In this secondary analysis of 240 participants in the FASTER study (NCT02387736), cognitive measures were administered at pre-treatment, after 6 months, and at 12 months. Self-harm was assessed from pre-treatment to 2-year follow-up. Multilevel models characterized changes in cognition across treatment. Generalized estimating equations examined whether pre-treatment cognitive performance predicted self-harm outcomes in each arm.
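The two model families mentioned, multilevel models for change in cognition and generalized estimating equations for self-harm outcomes, might be specified as in the hedged R sketch below. Data frames, variable names, and the Poisson/exchangeable choices are illustrative assumptions, not details taken from the FASTER analysis.

```r
# Hedged sketch: (1) a multilevel model for change in cognition across the
# three assessments, by arm; (2) a GEE testing whether baseline inhibitory
# control moderates self-harm outcomes by arm. Data frames, variables, and
# the Poisson/exchangeable choices are illustrative assumptions.
library(lme4)
library(geepack)

# (1) Change in cognitive performance across treatment
m_cog <- lmer(cognition ~ months * arm + (1 | id), data = cog_long)
summary(m_cog)

# (2) Self-harm counts over follow-up: arm x baseline inhibitory control
m_sh <- geeglm(self_harm_count ~ arm * inhibitory_control_baseline + months,
               id = id, family = poisson, corstr = "exchangeable",
               data = sh_long)
summary(m_sh)
```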
Results
Cognitive performance improved in both arms after 6 months of treatment, with no between-arm differences at 12 months. Pre-treatment inhibitory control was associated with different self-harm outcomes in DBT-6 v. DBT-12. For participants with average inhibitory control, self-harm outcomes were significantly better when assigned to DBT-12, relative to DBT-6, at 9–18 months after initiating treatment. In contrast, participants with poor inhibitory control showed better self-harm outcomes when assigned to brief DBT-6 v. DBT-12, at 12–24 months after initiating treatment.
Conclusions
This work represents an initial step toward an improved understanding of patient profiles that are best suited to briefer v. standard 12 months of DBT, but observed effects should be replicated in a waitlist-controlled study to confirm that they were treatment-specific.
In Chapter 6, we elaborate on the difficulties that may arise in perspective taking. These depend on individual, contextual, and textual variables. Among the individual factors are motivation, cognitive skills and capacities, and empathic dispositions. Variations in the situation and context, such as available information and feedback, can affect perspective taking. Specific difficulties include: the failure to identify relevant personal knowledge and experience, inconsistent and conflicting perspectives, problems reconciling a character’s perspective with the reader’s own evaluations, and/or the relationship of the reader’s cultural background to that of the character. Subtleties in the text, such as multiple perspectives, unreliable perspectives, and multiple perspective-taking targets, pose their own challenges.
Text comprehension frequently demands the resolution of no-longer-plausible interpretations to build an accurate situation model, an ability that may be especially challenging during second-language comprehension. Twenty-two native English speakers (L1) and twenty-two highly proficient non-native English speakers (L2) were presented with short narratives in English. Each text required the evaluation and revision of an initial prediction. Eye movements on the text and on a comprehension sentence indicated less efficient performance in L2 than in L1 comprehension, in both inferential evaluation and revision. Interestingly, these effects depended on individual differences in inhibitory control and linguistic proficiency. Higher inhibitory control reduced the time spent rereading previous parts of the text (better evaluation) as well as the time spent revisiting the text before answering the sentence (better revision) in L2 comprehenders, whereas higher proficiency reduced the time spent on the sentence when the story was coherent, suggesting better general comprehension in both languages.
Inhibitory control develops in early childhood, and atypical development may be a measurable marker of risk for the later development of psychosis. Additionally, inhibitory control may be a target for intervention.
Methods
Behavioral performance on a developmentally appropriate Go/No-Go task, including a frustration manipulation, completed by children ages 3–5 years (early childhood; n = 107), was examined in relation to psychotic-like experiences (PLEs; ‘tween’; ages 9–12), internalizing symptoms, and externalizing symptoms self-reported at long-term follow-up (pre-adolescence; ages 8–11). ERP N200 amplitude for a subset of these children (n = 34) with electrophysiological data during the task was examined as an index of inhibitory control.
Results
Children with lower accuracy on No-Go trials relative to Go trials in early childhood (F(1,101) = 3.976, p = 0.049) evidenced higher PLEs at the transition to adolescence 4–9 years later, reflecting a specific deficit in inhibitory control. No association was observed with internalizing or externalizing symptoms. Decreased accuracy during the frustration manipulation predicted higher internalizing (F(2,202) = 5.618, p = 0.004) and externalizing symptoms (F(2,202) = 4.663, p = 0.010). Smaller N200 amplitudes were observed on No-Go trials for those with higher PLEs (F(1,101) = 6.075, p = 0.020); no relationship was observed for internalizing or externalizing symptoms.
Conclusions
This long-term follow-up demonstrates for the first time a specific deficit in inhibitory control, both behaviorally and electrophysiologically, among individuals who later report more PLEs. Decreases in task performance under frustration induction indicated risk for internalizing and externalizing symptoms. These findings suggest that pathophysiological mechanisms for psychosis are relevant and discriminable in early childhood and, further, suggest an identifiable and potentially modifiable target for early intervention.
Recent studies suggest that heterogeneous bilingual experiences implicate different executive functions (EF) in children. Using a latent profile analysis, we conducted a more nuanced investigation of multifaceted bilingual experiences. By concurrently considering numerous bilingual indicators – age of L1 and L2 acquisition, interactional contexts of verbal exchanges, L1 and L2 proficiency, balance of language use at home and school, and receptive vocabulary – we identified three latent profiles (subgroups): balanced dual-language, dominant single-language, and mixed-interaction. We found that the balanced dual-language and dominant single-language profiles predicted significantly better switching than the mixed-interaction profile. However, no profile differences were found in working memory, prepotent response inhibition, or inhibitory control. These results held true when multiple covariates (age, sex, household income, and nonverbal intelligence) were controlled for. Using a person-centered approach, our study underscores that disparate bilingual experiences asymmetrically predict the shifting facet of EF during early childhood.
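As a rough illustration of a latent profile analysis over bilingual-experience indicators, the sketch below fits a Gaussian mixture with diagonal covariances (a common LPA specification) using mclust, which stands in for whatever software the authors used; indicator names are placeholders.

```r
# Rough LPA sketch: a Gaussian mixture with diagonal covariances fit to
# standardized bilingual-experience indicators. Indicator names are
# placeholders, and mclust stands in for the authors' unspecified software.
library(mclust)

indicators <- scale(bilingual[, c("aoa_l1", "aoa_l2", "interaction_context",
                                  "l1_proficiency", "l2_proficiency",
                                  "balance_home", "balance_school",
                                  "receptive_vocab")])

lpa <- Mclust(indicators, G = 1:5, modelNames = "VVI")
summary(lpa)               # BIC-selected number of profiles
head(lpa$classification)   # profile membership for each child
```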
English is imposed as the language of instruction in multiple linguistically diverse societies where there is more than one official language. This might have negative educational consequences for people whose first language (L1) is not English. To investigate this, 47 South Africans with advanced English proficiency but different L1s (L1-English vs. L1-Zulu) were evaluated on their listening comprehension ability. Specifically, participants listened to narrative texts in English that prompted an initial inference, followed by a sentence containing either an expected inference or an unexpected but plausible concept, assessing comprehension monitoring. A final question containing congruent or incongruent information in relation to the text followed, assessing the revision process. L1-English participants were more efficient at monitoring and revising their listening comprehension. Furthermore, individual differences in inhibitory control were associated with differences in revision. Results show that participants’ L1 appears to supersede their advanced English proficiency in highly complex listening comprehension.
To examine how executive functioning (EF) relates to academic achievement longitudinally in children with neurofibromatosis type 1 (NF1) and plexiform neurofibromas (PNs) and whether age at baseline moderates this relationship.
Method:
Participants included 88 children with NF1 and PNs (ages 6–18 years old, M = 12.05, SD = 3.62, 50 males) enrolled in a natural history study. Neuropsychological assessments were administered three times over 6 years. EF (working memory, inhibitory control, cognitive flexibility, and attention) was assessed by performance-based (PB) and parent-reported (PR) measures. Multilevel growth modeling was used to examine how EF at baseline related to initial levels and changes in broad math, reading, and writing across time, controlling for demographic variables.
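A minimal sketch of a multilevel growth model of the kind described, with baseline EF predicting the intercept and slope of an achievement domain across the three assessments, is given below in R. The data frame, variable names, and covariates are illustrative assumptions.

```r
# Sketch of a multilevel growth model: baseline cognitive flexibility (PB)
# predicting the intercept and slope of math achievement over three
# assessments. 'nf1_long', variable names, and covariates are illustrative.
library(lme4)
library(lmerTest)

m <- lmer(math ~ time * flexibility_baseline + age_baseline + sex +
            (time | id),
          data = nf1_long)
summary(m)
```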
Results:
The relationship between EF and academic achievement varied across EF and academic domains. Cognitive flexibility (PB) uniquely explained more variance in initial math, reading, and writing scores; working memory (PB) uniquely explained more variance in initial levels of reading and writing. The associations between EF and academic achievement tended to remain consistent across age groups, with one exception: lower initial levels of inhibitory control (PR) were related to a greater decline in reading scores, a pattern that was more evident among younger (versus older) children.
Conclusions:
Findings emphasize the heterogeneous nature of academic development in NF1 and that EF skills could help explain the within-group variability in this population. Routine cognitive/academic monitoring via comprehensive assessments and early targeted treatments consisting of medication and/or systematic cognitive interventions are important to evaluate for improving academic performance in children with NF1 and PNs.
Sentences that have more than one possible meaning are said to be syntactically ambiguous (SA). Because the correct interpretation of these sentences can be unclear, resolving SA sentences can be cognitively demanding for children, particularly with regards to inhibitory control (IC). In this study we provide three lines of evidence supporting the importance of IC in SA resolution. First, we show that children with higher IC resolve more SA sentences correctly. Second, we show that SA resolution is worse on tasks that place higher demands on IC, even for children with high IC. Third, we show that children with higher IC make different types of SA errors than children with lower IC. This study expands understanding of the cognitive skills underlying language and suggests a need to consider task demands on IC when developing educational curriculums.
Recent initiatives have focused on integrating transdiagnostic biobehavioral processes or dispositions with dimensional models of psychopathology. Toward this goal, the biobehavioral traits of affiliative capacity (AFF) and inhibitory control (INH) hold particular promise, as they demonstrate transdiagnostic stability and predictive validity across developmental stages and differing measurement modalities. The current study employed data from different modes of measurement in a sample of 1830 children aged 5–10 years to test for associations of AFF and INH, individually and interactively, with broad dimensions of psychopathology. Low AFF, assessed via parent report, evidenced predictive relations with distress- and externalizing-related problems. INH, assessed via cognitive-task performance, was not itself related to either psychopathology dimension, but it moderated the effects observed for low AFF, such that high INH protected against distress symptoms in low-AFF participants, whereas low INH amplified distress and externalizing symptoms in low-AFF participants. Results are discussed in the context of the interface of general trait transdiagnostic risk factors with quantitatively derived dimensional models of psychopathology.
“Subsyndromal” obsessive-compulsive disorder symptoms (OCDSs) are common and cause impaired psychosocial functioning. OCDSs are better captured by dimensional models of psychopathology, as opposed to categorical diagnoses. However, such dimensional approaches require a deep understanding of the underlying neurocognitive drivers and impulsive and compulsive traits (ie, neurocognitive phenotypes) across symptoms. This study investigated inhibitory control and self-monitoring across impulsivity, compulsivity, and their interaction in individuals (n = 40) experiencing mild–moderate OCDSs.
Methods
EEG recording concurrent with the stop-signal task was used to elicit event-related potentials (ERPs) indexing inhibitory control (ie, N2 and P3) and self-monitoring (ie, error-related negativity and correct-related negativity (CRN): negativity following erroneous or correct responses, respectively).
Results
During unsuccessful stopping, individuals high in both impulsivity and compulsivity displayed enhanced N2 amplitude, indicative of conflict between the urge to respond and the need to stop (F(3, 33) = 1.48, P < .05, 95% CI [−0.01, 0.001]). Individuals high in compulsivity and low in impulsivity showed reduced P3 amplitude, consistent with impairments in monitoring failed inhibitory control (F(3, 24) = 2.033, P < .05, 95% CI [−0.002, 0.045]). Following successful stopping, high compulsivity (independent of impulsivity) was associated with lower CRN amplitude, reflecting hypo-monitoring of correct responses (F(4, 32) = 4.76, P < .05, 95% CI [0.01, 0.02]), and with greater OCDS severity (F(3, 36) = 3.32, P < .05, 95% CI [0.03, 0.19]).
Conclusion
The current findings provide evidence for differential, ERP-indexed inhibitory control and self-monitoring profiles across impulsive and compulsive phenotypes in OCDSs.