
Chapter 5 - Comparing L1 and L2 Production in the Trinity Lancaster Corpora

Published online by Cambridge University Press:  09 January 2025

Tony McEnery
Affiliation:
Lancaster University
Isobelle Clarke
Affiliation:
Lancaster University
Gavin Brookes
Affiliation:
Lancaster University

Summary

This chapter shifts focus to consider the extent to which the behaviours viewed in Chapters 3 and 4 were unique to learners. This is achieved by using a new corpus, the TLC L1 corpus, which is composed of the same exam as the TLC, but in this case sat by L1 speakers. This allows us to see the degree of overlap between the discourse unit functions selected by L1 speakers undertaking the same tasks as the L2 speakers. The role of micro-structural features, specifically grammatical features, in forming similarities and differences between the two sets of examinees (L1 and L2 speakers) is considered. As part of this, the chapter focuses on four particular grammatical features – demonstrative determiners, numeral nouns, passives and relative clauses – which seem to link discourse units to proficiency in the TLC, to the extent that they generate differences between discourse unit functions when the TLC and TLC L1 are compared. The chapter also considers, however, the normative nature of the analysis undertaken and notes that individual learners’ performance may vary from the norms examined.

Type: Chapter
Information: Learner Language, Discourse and Interaction: A Corpus-Based Analysis of Spoken English, pp. 133–159
Publisher: Cambridge University Press
Print publication year: 2025
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

5.1 Introduction

Chapters 3 and 4 worked to show how, at the level of macro-structure, the discourse units in the TLC could be classified according to function derived from a form-to-function relationship. This was achieved using short-text MDA. Chapter 4 concluded by considering how socio-pragmatics was closely tied to the selection of macro-structures in the context of the exam on which the corpus is based. It also showed how micro- and macro-structures in discourse could converge and diverge, functionally. However, in the absence of additional evidence, it is difficult to begin to test some hypotheses. For example, the extent to which the L2 speakers’ performance in the data was a product of their training, as opposed to being strictly dictated by the needs of tasks, is unclear. Also, the degree to which the L2 performance in the conversations is at all authentic remains unclear – if we judge authenticity to be what the exam in question offers, namely L1-like performance governed by the cooperative principle, then we only have the examiner’s mark as a guide on which to base any judgement of the student’s performance against that benchmark. In this chapter we introduce fresh data to provide better insight into such questions. Those data comprise the same tasks, undertaken not by L2 speakers trained for the exam but by L1 speakers with no such training. These L1 speakers respond to the tasks in the exam simply on the basis of their ability to engage in the task as L1 speakers. The corpus in question is the TLC L1.

5.2 Trinity Lancaster Corpus L1

In the analysis so far, proficiency has been a key point in most of the discussions. As speakers develop proficiency across grades, or they demonstrate greater proficiency than their fellow students within a grade, we see the patterns of functions in use shift. We see that as students master functions, the focus of the interaction shifts towards them (as shown in the discussion of Dimension 4 in Chapter 4) or we see them use a function and, as proficiency develops, their use of that function becomes more pronounced (as happens with students scored A in the Discussion task in Dimension 2, see Table 3.4 in Chapter 3). With reference to the examiner, we see their use of discourse functions, at the turn and at the discourse unit level, adapt as the student increases in their proficiency – at lower levels of proficiency certain functions are strongly indicative of the examiner scaffolding the student’s performance, for example (see the Realis function used by D grade students, as discussed in Section 3.4). Yet it remains unclear as to whether this so-called development is indicative of increased proficiency levels or improved familiarisation with examination constructs. A reasonable way of clarifying this issue is to study the version of the TLC in which the exam is taken by L1 British English speakers. These speakers were never trained to take the exam, so their performance is quite independent of any knowledge of the syllabus set for the exams. As noted already, their only qualification and preparation is that they are L1 British English speakers. If we look at the use of functions by this group, do we get a sense that they perform in a similar way to the L2 speakers; that is, while they may be viewed via the optic of the construct in terms of assessment, may the behaviours they display be assumed to be those which an L1 speaker would display given the tasks in the construct? If those behaviours are similar to those of the L2 speakers taking the test, then we might argue that the behaviours we are seeing from the L2 speakers are indicative of the development of proficiency; that is, if the performance of high-proficiency (e.g. students taking the grade 8 exam) L2 speakers looks more like L1 speaker performance than the L2 speakers taking a lower grade of exam, then the goal of the examination – to test the progress of L2 speakers towards a target variety – may be said to have been met.Footnote 1

The first thing that is apparent in studying the L1 and the L2 Trinity exams as a whole is that there is some evidence, in a sense, of greater diversity in the range of linguistic features being used by the L1 speakers. If we consider grammatical features which appear in more than 5 per cent of the turns in the turn-level short-text MDA of the examinee turns in each corpus, then we find a larger number of such features in the L1 data (60 features out of 120 annotated) as opposed to the L2 data (42 features out of 120 annotated). However, before concluding that this is evidence of a clear difference in proficiency, which undermines the goal of the comparison, we need to pause and consider what is being compared. A result like this should be expected as the L1 speakers are all taking a higher grade exam than the L2 speakers, as discussed in Chapter 1, because the exam was assumed to represent one which an L2 speaker whose proficiency was a close match to that of an L1 speaker would take. So the difference observed is, prima facie, reassuring. It shows that the difference in proficiency that may be assumed as the grade of exam increases (from 6/7/8 to 12 in this case) seems apparent from the very first comparison of the data. Figure 5.1 shows the features which are present in the L1 examinees’ data above the 5 per cent threshold, but not in the L2 examinees’.

Figure 5.1 Features appearing in 5 per cent or more of the L1 turns which do not appear in 5 per cent or more of the L2 turns.
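
The 5 per cent threshold used above can be operationalised straightforwardly. The sketch below is illustrative only: it assumes a hypothetical representation in which each turn is available as the set of grammatical features tagged in it, and the function and variable names are ours rather than those of the corpus annotation scheme.

```python
from collections import Counter

def frequent_features(turns, threshold=0.05):
    """Return the features that occur in at least `threshold` of the turns.

    `turns` is assumed to be a list of sets, each set holding the grammatical
    features tagged at least once in that turn (a hypothetical representation).
    """
    turn_count = len(turns)
    presence = Counter()
    for features in turns:
        presence.update(set(features))  # count each feature once per turn
    return {f for f, n in presence.items() if n / turn_count >= threshold}

# Features above the threshold in the L1 data but not in the L2 data
# (cf. Figure 5.1), assuming l1_turns and l2_turns have been loaded elsewhere:
# l1_only = frequent_features(l1_turns) - frequent_features(l2_turns)
```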

Indeed, this use of a greater diversity of linguistic features as proficiency increases is also observed between the grades in the L2 data. If we compare the features that occur in more than 5 per cent of the turns of the examinees at each grade level, grade 6 learners use thirty-eight linguistic features, grade 7 learners use forty-five linguistic features, while grade 8 learners use forty-eight linguistic features.

This broadening of grammatical features in frequent use can lead to a very large number of grammatical features coming together to form a function. A good example of this is in Dimension 1 of the turn-level analysis of the examinee data in the TLC L1. As usual, this dimension relates to long and short turns. However, in the L1 data this distinction takes on a coherent function – the positive side of the dimension is linked to Elaborated Speech, often utterances in which the examinee takes long turns in order to produce explanations or present an argument. This is opposed to the negative Dimension 1 function of Minimal Response, typically realised in the form of backchannels. However, Elaborated Speech is composed of a very large number of grammatical features – fifty-one features cluster to characterise this function.Footnote 2 Accordingly, in the turn-based short-text MDA of the Trinity L1 material in Table 5.1, we do not list the features present for each dimension, noting instead the number of features present for each side of the dimension. In the table, where a function has a direct or close equivalent in the TLC examinee turn-based short-text MDA, it is marked with an asterisk and the number of features forming that function in L2 speech is noted in parentheses. As in earlier discussions, the long and short distinction will not be explored further here.

Table 5.1 Functions at turn level in the TLC L1 in tasks shared between L2 (grades 6–8) and L1 speakers (grade 12).

Dimension | Label | Number of features linked to the function
Dim. 1 +ve | Elaborated speech | 51
Dim. 1 −ve | Minimal response | None (characterised by absences only)
Dim. 2 +ve | Informational elaboration* | 14 (11)
Dim. 2 −ve | Involved* | 13 (9)
Dim. 3 +ve | Elaborated discussions | 43
Dim. 3 −ve | Brief attitudinal descriptions | 11
Dim. 4 +ve | Evaluation | 17
Dim. 4 −ve | Illustration | 11
Dim. 5 +ve | Information seeking* | 13 (7)
Dim. 5 −ve | Narrative* | 12 (11)
Dim. 6 +ve | Appraisal | 10
Dim. 6 −ve | Clarification | 8

If we consider the shared functions, we can explore the extent to which there is progression, in terms of the features constituting the function, when we compare L2 and L1 examinee speech at the micro-structural level. A finding consistent with the discussion so far is evident in Table 5.1 – the functions shared between L2 and L1 speakers are formed from a smaller repertoire of frequently used features in the L2 speech when compared with L1. Of course, the question arises of the degree to which the L1 speech represents an elaboration of the same function as the L2 speech – may it be that the speech, while functionally similar, is actually distinct in terms of form between the L2 and L1 speakers; for example, are a different set of grammatical items used to realise the Narrative function at turn level for L2 speakers compared to L1 speakers producing the same function?

The answer is generally ‘no’ – the L1 function is composed of a wider range of features than the L2 speech, but the features used by the L2 speakers are generally present in the speech of L1 speakers within the same function. For example, consider Informational Elaboration. This is closely aligned to negative Dimension 2 in the analysis of L2 turns, the Informational function. If we compare L2 and L1 speech, we see that all six of the features whose presence combines to produce this function in L2 speech are also involved in the production of this function for L1 examinees. These features are coordinating conjunction, definite article, general determiner, general noun, preposition and proper noun. So nearly half of the features which constitute the Informational Elaboration function for L1 examinees are also involved in producing a similar function for L2 examinees. Of the features whose absence is part of the cluster identified in Section 2.4.2 as constituting this function for L2 speakers, none are features whose presence defines the function for L1 speakers. Rather, the features which define this function for L1 speakers, but not L2 speakers, are the presence of attributive adjectives, demonstrative determiners, nominalisations, numeral nouns, passives, past tense, possession and relative clauses. If we look at that set of features from the perspective of the features which fall below the threshold of being used in more than 5 per cent of turns, we see that the list splits. Some of these features are ones used frequently by the L2 speakers, but not for this function – attributive adjectives, nominalisations, past tense and possession. Accordingly, we may say that the issue here is of the broadening of the function – as proficiency develops, we might assume that features that have clearly been acquired, which are part of the function in the target variety, come to be incorporated into this function. However, for four features – demonstrative determiners, numeral nouns, passive constructions and relative clauses – we have evidence that their use is lower in L2 examinee speech, relative to L1 examinee speech.

Hence, we may hypothesise that what we are seeing here is an issue of acquisition – the features have yet to be widely acquired by the population being studied and thus are absent. Of course, we could argue instead that the features have been acquired but are simply not used by the L2 speakers because they have not been encouraged to produce them for the exam. Can we gain any further evidence in favour of, or against, the two hypotheses for functional development proposed here – two broad accounts of development in our data, acquisition and incorporation? We may find evidence that strengthens our faith in the hypotheses presented. If not, we may wish to explore an alternative, such as the difference being an unintended consequence of the students’ learning environment. To do this, and to explore more broadly what acquisition of a discourse function may look like, the following section examines the grammatical features which seem to be driving this difference between the development of functions in L1 and L2 speech.

5.3 The Role of Demonstrative Determiners, Numeral Nouns, Passives and Relatives

In this section we use an exploratory approach to focus on the role of these four features. The approach is designed to make us stand back from the broad, discourse macro-structures we have been studying and turn our attention towards features of the micro-structures that underpin these macro-structures. The overall goal is to explore the development, if any, of these features across grades, but also what factors may be linked to that development and how that development may link to the acquisition of macro-structures. The exploration will, as part of the move from the macro to the micro, start with broad views which collectivise the learners studied. As the analysis progresses, the view shifts to a more fine-grained analysis of the data as we move closer to the individual in context.Footnote 3

Let us begin our exploration of these features by looking at their development across the three grades of exam in the L2 data. The results are shown in Table 5.2. All of the features in question steadily increase as the grade of exam taken increases. In the table, grades 7 and 8 are compared to the grade preceding them and the results shown in parentheses. For grade 6, the results in parentheses show how frequencies at grade 8 compare to those at grade 6 for each feature. So, in the rows for grades 7 and 8 we can see progress from a baseline set by grade 6. For the row including grade 6, we see the result of the students’ journey from 6 to 8. The results shown in parentheses are composed of two parts. The first part is a Log Ratio score which shows the size of the effect (positive or negative). Log Ratio is a logarithmic value showing the nature of the ratio of occurrence between the two frequencies compared, relative to the size of the corpus in which the observations were made. A Log Ratio score of one indicates the relative frequency is double in A compared to B, two indicates a quadrupling and so on. Negative scores may be interpreted the same way except that A is smaller than B in these cases. The second part of the comparison relates to a significance test: log-likelihood. To show this concisely, those effect sizes that are also linked to a test which shows that the frequency difference is significant are marked by an asterisk. No asterisk indicates that the result is not statistically significant. The number in superscript following the asterisk shows which of four thresholds the test has passed: these are one (95 per cent level), two (99 per cent level), three (99.9 per cent level) and four (99.99 per cent level). From this point onwards, in the interest of space, we will refer to the features studied using mnemonics – DEMDET (demonstrative determiner), N-NOUN (numeral noun), PASS (passive) and REL (relative clause).
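
For readers who wish to replicate comparisons of this kind, the following is a minimal sketch of how the two statistics can be computed from raw counts. It assumes the standard Lancaster formulations – Log Ratio as the binary log of the ratio of relative frequencies, and Dunning’s log-likelihood (G2) as the significance test – together with the usual critical values for the four thresholds; the function and variable names are ours.

```python
import math

# Critical values of G2 (one degree of freedom) for the four thresholds used here:
# *1 = 95%, *2 = 99%, *3 = 99.9%, *4 = 99.99%.
THRESHOLDS = [(15.13, '*4'), (10.83, '*3'), (6.63, '*2'), (3.84, '*1')]

def log_ratio(freq_a, words_a, freq_b, words_b):
    # Binary log of the ratio of the two relative frequencies.
    return math.log2((freq_a / words_a) / (freq_b / words_b))

def log_likelihood(freq_a, words_a, freq_b, words_b):
    # Dunning's log-likelihood statistic for a two-corpus frequency comparison.
    expected_a = words_a * (freq_a + freq_b) / (words_a + words_b)
    expected_b = words_b * (freq_a + freq_b) / (words_a + words_b)
    g2 = 0.0
    for observed, expected in ((freq_a, expected_a), (freq_b, expected_b)):
        if observed > 0:
            g2 += 2 * observed * math.log(observed / expected)
    return g2

def significance_mark(g2):
    # Return the asterisk notation used in the tables, or '' if not significant.
    for critical, mark in THRESHOLDS:
        if g2 >= critical:
            return mark
    return ''
```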

Table 5.2 Frequency, per 10,000 words, of features by grade.

Grade | DEMDET | N-NOUN | PASS | REL
Grade 6 (L2) | 38.50 (LR 0.17*4) | 31.39 (LR 0.25*2) | 16.81 (LR 0.78*4) | 14.84 (LR 0.80*4)
Grade 7 (L2) | 41.31 (LR 0.1) | 33.69 (LR 0.1) | 25.88 (LR 0.62*4) | 19.37 (LR 0.47*4)
Grade 8 (L2) | 48.64 (LR 0.24*3) | 34.24 (LR 0.02) | 27.04 (LR 0.06) | 24.16 (LR 0.33*3)

These results suggest that there is support for both hypotheses. There is a steady increase in usage of all four features across the grades and the rank ordering of the features is stable – DEMDET>N-NOUN>PASS>REL. However, the increase in rate of usage for the features is not uniform – N-NOUN, PASS and REL experience their greatest surge of use at grade 7, while DEMDET experiences its surge at grade 8, producing a notable significance effect. Even where features surge in the same grade, the rate of increase is not uniform – so the effect measured at grade 7 shows PASS experiencing a greater proportional increase than REL, which, in turn, is increasing in usage faster than N-NOUN. The increases at this level for PASS and REL produce marked significance scores. The impression given is that N-NOUN has developed to a reasonably high degree of usage at grade 6 and then increases slowly over grades 7 and 8, while PASS and REL are somewhat infrequent at grade 6, but increase in use markedly across grades 7 and 8, though they remain less frequent than N-NOUN. Of the two, the increase in use of PASS abates at grade 8, while the usage of REL increases more markedly than PASS at grade 8, as indicated both by the effect size measure and significance statistic.

The trends discussed so far are all the more apparent if we consider the first row in the table, where the effect size and significance levels are based on a comparison of grade 8 and grade 6. The overall trajectory is one of increase. Yet while that trajectory, in terms of frequency of use, is uniform in rank terms for the features, there are notable differences in rate of increase and timing of increase. If we consider the overall increase in the data between grades 6 and 8, Log Ratio shows that the rate of increase runs (lowest to highest) DEMDET, N-NOUN, REL, PASS, as it does throughout the table. Yet there is also a clear bifurcation in the data, with PASS and REL increasing more markedly than the other two features. Across the board in this case, the results may be viewed as significant, in three cases markedly so.

So, for these features, the pattern of development is complex, even if they are all, from one perspective, behaving uniformly; that is, increasing across the grades and retaining a relative frequency ranking. Yet even that simple behaviour varies – with REL and PASS experiencing a greater proportional increase across the grades than the other two features.Footnote 4

A further question poses itself at this point – is the pattern of development uniform across tasks? Given the marked differentiation by task that has been observed so far in this book, we should be duly cautious about the patterns discussed so far. May it be the case that the acquisition is bound by task, for example, and at the task level we may see a more varied pattern of differentiation in the pattern of development of these features? Table 5.3 looks at the two major tasks shared by all three grades discussed: Conversation and Discussion. The normalised frequencies (per 10,000 words – PTKW) for Conversation are followed by a combined indication of the Log Ratio and log-likelihood score when the frequency of that feature in Conversation is compared to that feature in Discussion at the same grade.
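
As a reminder of what the cells in Table 5.3 contain, a per-10,000-word (PTKW) figure is simply the raw count scaled by the size of the relevant subcorpus. The snippet below, which reuses the functions sketched above and invents its own variable names purely for illustration, shows how one cell of such a table could be derived.

```python
def ptkw(raw_count, word_count):
    # Normalised frequency per 10,000 words.
    return raw_count / word_count * 10_000

# Hypothetical raw figures for one feature at one grade:
# conv_count, conv_words = occurrences and word total in Conversation
# disc_count, disc_words = occurrences and word total in Discussion
# cell = (f"{ptkw(conv_count, conv_words):.2f} "
#         f"({log_ratio(conv_count, conv_words, disc_count, disc_words):.2f}"
#         f"{significance_mark(log_likelihood(conv_count, conv_words, disc_count, disc_words))})")
```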

Table 5.3 Frequency of feature in Conversation compared to Discussion.

Grade 6
Task (PTKW) | DEMDET | N-NOUN | PASS | REL
Conversation | 28.50 (−0.72*4) | 23.14 (−0.73*4) | 14.22 (−0.42*3) | 12.70 (−0.39*2)
Discussion | 46.96 | 38.37 | 18.99 | 16.66

Grade 7
Task (PTKW) | DEMDET | N-NOUN | PASS | REL
Conversation | 31.98 (−0.65*4) | 24.21 (−0.82*4) | 20 (−0.65*4) | 16.19 (−0.61*4)
Discussion | 50.17 | 42.71 | 31.47 | 24.78

Grade 8
Task (PTKW) | DEMDET | N-NOUN | PASS | REL
Conversation | 44.24 (−0.26*1) | 22.78 (−1*4) | 21.33 (−0.62*4) | 24.76 (−0.12)
Discussion | 53 | 45.57 | 32.68 | 26.95

Task clearly has an impact on the production of all four features. In every case, the number of examples, PTKW, of each of the four features is lower in Conversation than Discussion. As shown in the table, this tendency produces a strong effect at grades 6 and 7 across all four features, but by grade 8, in what we may hypothesise is a sign that acquisition is progressing through the population of learners, the effect is starting to abate, although it is still pronounced for N-NOUN and PASS. Overall, the results seem to suggest that the variation is conditioned by function, not acquisition. However, one should be mindful that there is a difference between the tasks – Conversation affords a lower degree of preparation to the student than Discussion. Discussion is controlled by the student – they select and introduce the task. So it may be that what we are seeing is, to some extent, influenced by acquisition – where a student can prepare a Discussion, they may engage more with features which, when they are required to shoulder the burden of greater spontaneity, they shun because of perceived difficulty, for example. We will explore that hypothesis in more depth shortly.

If we look vertically, rather than horizontally, at the tables, a slightly different picture emerges. This becomes clearer if we consider log-likelihood and Log Ratio scores for the data across the grades. In Table 5.4, we return to the view of the data and the comparisons in Table 5.2, but this time splitting the data by task. In the table, grade 7 is compared to grade 6 and then grade 8 is compared to grade 7.

Table 5.4 Frequency of features across grade compared by task.

Conversation
Comparison | DEMDET | N-NOUN | PASS | REL
G7 to 6 | 0.17 | 0.07 | 0.49*1 | 0.35
G8 to 7 | 0.47*3 | −0.09 | 0.09 | 0.61*2

Discussion
Comparison | DEMDET | N-NOUN | PASS | REL
G7 to 6 | 0.1*1 | 0.15*2 | 0.73*3 | 0.57*2
G8 to 7 | 0.08*2 | 0.09*2 | 0.05*1 | 0.12*1

While all of the features in Discussion increase in frequency with grade, in Conversation N-NOUN does not increase markedly – in fact, from grade 7 to grade 8 the frequency of the feature actually declines slightly, as evidenced in the slight negative effect size in Table 5.4. However, in terms of effect size, we can see that while the relatively strong effects are quite differentiated in Table 5.4, the general picture is of greater effect sizes and significance scores between grades in Discussion relative to Conversation. The exceptions are DEMDET and REL in Conversation between grades 7 and 8. If we check the PTKW figures in Table 5.3, we can see that these effect sizes link to sizable increases in both of these features in this task, but when we consult the frequency of these features across the two tasks at grade 8 in Table 5.3, we see no strong effect – in essence we see usage in Conversation catching up with that in Discussion. This reinforces the view that Discussion is leading on the development, or at least the use, of these features.

Let us now move closer to the level of the individual. Are the trends in the data identified so far typical of all users, or some users? At any point in the development of the features in question, is their use smoothly distributed across all of the learners in the corpus? To explore questions like this, Tables 5.5–5.7 present information which can allow us to explore individual variation in the data. Each table represents one grade. Within the tables we report, for each task: (i) the average number of uses of the feature by learner turn; (ii) the standard deviation of that mean; (iii) the single greatest number of uses of each feature by a single learner; (iv) the number (and, following, a proportion) of speakers who use that feature at least once; (v) the number (and, following, a proportion) of speakers who do not use the feature at all; and (vi) the number (and, following, a proportion) of speakers whose usage lies above the upper bound of the standard deviation (the lower bound, 0, almost always falls within the standard deviation of the mean).
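
A sketch of how measures of this kind can be derived is given below. It is a simplification of what Tables 5.5–5.7 report: it assumes a hypothetical list holding one count per speaker for a given feature, task and grade, and it summarises per speaker rather than per learner turn; the helper name and the use of the population standard deviation are our assumptions.

```python
import statistics

def speaker_profile(uses_per_speaker):
    """Summarise per-speaker counts of one feature in one task at one grade.

    `uses_per_speaker` is assumed to be a list with one (possibly zero) count
    per speaker; averaging by learner turn, as in the published tables, is
    ignored in this simplified sketch.
    """
    mean = statistics.mean(uses_per_speaker)
    sd = statistics.pstdev(uses_per_speaker)
    upper_bound = mean + sd
    total = len(uses_per_speaker)
    users = sum(1 for n in uses_per_speaker if n > 0)
    outliers = sum(1 for n in uses_per_speaker if n > upper_bound)
    return {
        'average': mean,
        'standard deviation': sd,
        'max': max(uses_per_speaker),
        'speakers using': (users, users / total),
        'speakers not using': (total - users, (total - users) / total),
        'outliers': (outliers, outliers / total),
    }
```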

Table 5.5 Individual variation at grade 6.

Grade 6

Conversation
Measure | DEMDET | N-NOUN | PASS | REL
Average | 0.99 | 0.80 | 0.49 | 0.44
Standard deviation | 1.23 | 1.12 | 0.88 | 0.78
Max | 7 | 7 | 4 | 4
Speakers using | 278 (54.51%) | 237 (46.47%) | 160 (31.37%) | 157 (30.78%)
Speakers not using | 232 (45.49%) | 273 (53.53%) | 350 (68.63%) | 353 (69.22%)
Outliers | 52 (10.2%) | 108 (21.18%) | 58 (11.37%) | 49 (9.61%)

Discussion
Measure | DEMDET | N-NOUN | PASS | REL
Average | 1.93 | 1.57 | 0.78 | 0.68
Standard deviation | 2.01 | 1.73 | 1.16 | 1.10
Max | 12 | 11 | 8 | 9
Speakers using | 368 (72.02%) | 344 (67.32%) | 228 (44.62%) | 201 (39.33%)
Speakers not using | 143 (27.98%) | 167 (32.68%) | 283 (55.38%) | 310 (60.68%)
Outliers | 89 (17.42%) | 60 (11.75%) | 96 (18.79%) | 75 (16.83%)

Table 5.6 Individual variation at grade 7.

Grade 7

Conversation
Measure | DEMDET | N-NOUN | PASS | REL
Average | 1.46 | 1.11 | 0.91 | 0.74
Standard deviation | 1.58 | 1.23 | 1.20 | 1.05
Max | 8 | 7 | 6 | 6
Speakers using | 181 (67.04%) | 164 (60.74%) | 134 (49.63%) | 121 (44.81%)
Speakers not using | 89 (32.96%) | 106 (39.26%) | 136 (50.37%) | 149 (55.19%)
Outliers | 28 (10.37%) | 35 (12.96%) | 33 (12.22%) | 51 (18.89%)

Discussion
Measure | DEMDET | N-NOUN | PASS | REL
Average | 2.41 | 2.06 | 1.51 | 1.19
Standard deviation | 2.43 | 1.98 | 1.76 | 1.39
Max | 14 | 14 | 9 | 7
Speakers using | 213 (78.89%) | 205 (75.93%) | 174 (64.44%) | 161 (59.63%)
Speakers not using | 57 (21.11%) | 65 (24.07%) | 96 (35.56%) | 109 (40.37%)
Outliers | 48 (17.78%) | 27 (10%) | 29 (10.74%) | 42 (15.56%)

Table 5.7 Individual variation at grade 8.

Grade 8

Conversation
Measure | DEMDET | N-NOUN | PASS | REL
Average | 2.24 | 1.15 | 1.08 | 1.25
Standard deviation | 1.82 | 1.51 | 1.43 | 1.50
Max | 9 | 8 | 8 | 10
Speakers using | 122 (81.33%) | 83 (55.33%) | 78 (52%) | 96 (64%)
Speakers not using | 28 (18.67%) | 67 (44.67%) | 72 (48%) | 54 (36%)
Outliers | 14 (9.33%) | 26 (17.33%) | 23 (15.33%) | 23 (15.33%)

Discussion
Measure | DEMDET | N-NOUN | PASS | REL
Average | 2.71 | 2.33 | 1.67 | 1.38
Standard deviation | 2.44 | 2.22 | 1.77 | 1.52
Max | 12 | 13 | 8 | 8
Speakers using | 121 (80.67%) | 123 (82%) | 106 (70.67%) | 99 (66%)
Speakers not using | 29 (19.33%) | 27 (18%) | 44 (29.33%) | 51 (34%)
Outliers | 20 (13.33%) | 23 (15.33%) | 19 (12.67%) | 27 (18%)

Standard deviation reveals an interesting pattern across the grades – as the grade increases, the standard deviation of any given feature increases. This, we hypothesise to be an indication that, with increasing proficiency, there is a greater range of frequency with which a feature may be used; that is, as the possibility for a speaker to use a feature is realised, the standard deviation widens because the choices around the use of the feature broaden, for example, in terms of how often it may be used. Do any of the other data support that hypothesis? The Max value for the features generally does – when compared pairwise, the frequency of use of a feature in any given grade increases in the next highest grade. This is usually, rather than always, true and we should be mindful that the Max value shows us only the behaviour of the small number of examinees (or more typically, the examinee) who achieve that Max value. A better measure of what is happening in the population might be the number of speakers who exceed the upper bound of the standard deviation – that is, the number of speakers whose usage of a feature in a task at a particular level exceeds the bound set by the upper value of the standard deviation. Here the picture is mixed, as one might expect from the hypothesis that we are seeing more diversity in behaviour – there are some combinations of feature and task which have a steady growth in outliers across the grades, for example, PASS in Conversation. Yet much more typically, there is instability in the proportion of outliers across the levels, though if we look at the average, standard deviation, Max and outlier values together, the impression is strengthened that a diversity of behaviour with regard to these features is increasing across the grades. Tables 5.5–5.7 have a further piece of information that allows us to understand both the diversity of behaviour and the driver of the increase of frequency observed. More speakers are using the features as the grades increase. The diversity is driven by the use of the features spreading through the usage of the population of speakers. If we focus on the percentage of speakers not using a feature in a task, and compare those across the levels, we see that the decline in the proportion of speakers not using one of these features in a task is consistent, with only one exception – there is a slight increase in the number of speakers not using N-NOUN in grade 8 compared to grade 7, as noted already. So, we have powerful evidence for acquisition in the population here.

5.4 The Optic of Task

We must, of course, once more be mindful that the rate of acquisition, as evidenced by use, varies across the intersection of the features and tasks. Regarding task, for every feature, across all levels, the proportion of speakers not using the feature is greater in Conversation than Discussion. So in this case, the increased frequency of the features in Discussion, as already discussed, is mirrored by an increase in speakers using the feature in Discussion. The hypothesis forwarded for the frequency effect – the possibility that the student prepared the introduction of the Discussion encourages them to use features that some of them may normally shun – is certainly not contradicted by this finding. If we look pairwise across tasks, comparing each grade to the one below, for each task we can ask whether the proportion of speakers not using a feature in a task declines with increasing grade. In every case it does, with the exception of N-NOUN between grades 7 and 8, an anomaly we have already observed and discussed. So, again, we have evidence that, for both tasks and all features, the trend is towards a greater proportion of speakers using these features as the grade of exam increases. One last question we will ask of the speakers who do not produce these features is whether the features, in relative terms, are stable with regard to these speakers not using them—for example, is REL always the most shunned feature, proportionately, by speakers? There is a clear task effect here. For Discussion, which is partly prepared, there is total uniformity – in descending order, the greatest number of speakers not electing or able to use a feature in all grades is REL, PASS, N-NOUN, DEMDET. However, the less-controlled Conversation task has a more dynamic rank ordering. DEMDET is the least shunned feature, irrespective of grade. At levels 6 and 7, the other features are ordered as per Discussion. However, at grade 8, that ranking shifts to PASS, N-NOUN, REL. All three of these functions are altering notably in this task at this level: (i) there is an increase in speakers not using N-NOUN, as outlined earlier and in line with the negative effect reported for N-NOUN at this level compared to grade 7; (ii) the number of speakers not using REL falls notably from 58.15 per cent of speakers to 37.33 per cent; and (iii) the number of speakers not using PASS continues its steady decline. These three factors together alter this ranking.

Across the data we see drivers which begin to explain the diversity of usage implied by the widening standard deviation. Where a feature is largely absent from a population – as we see with REL at grade 6 – the capacity that exists for variation is relatively limited. Few speakers are using it and those that do use it generally do so infrequently. As more speakers use the feature, the frequency of the feature increases and the overall use of the feature rises. It should be remembered at this point that the usage itself is not a matter, we would argue, of random choice. The speakers are selecting the feature with a purpose in mind – and one of those can be the selection of a micro-structure to build a macro-structure which requires that function. Where the function is absent, then the micro- and macro-structures may not be realised ideally or, indeed, at all.

One final point we should consider is whether, in fact, there are some learners who do not use a feature in one task, but do use it in the other task – if this behaviour is widespread then the users not using a feature in Tables 5.5–5.7 are better understood as having acquired a function but electing not to use it in one of two tasks. Table 5.8 explores this, noting, for each feature at each grade, how many users do not use a feature at all—that is, in either task. As the figures produce results broadly similar to those in Tables 5.5–5.7, we will simply mention rather than discuss this here.
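
The ‘not used in either task’ figures amount to intersecting the sets of non-users from the two tasks, as in the brief sketch below (the representation and variable names are hypothetical).

```python
def non_users_in_both_tasks(conversation_uses, discussion_uses):
    """Speakers who never produce the feature in either task.

    Both arguments are assumed to map speaker IDs to the number of times the
    speaker used the feature in that task.
    """
    conv_non_users = {s for s, n in conversation_uses.items() if n == 0}
    disc_non_users = {s for s, n in discussion_uses.items() if n == 0}
    return conv_non_users & disc_non_users
```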

Table 5.8 The percentage of speakers not producing a specific feature in either task by grade.

Grade | DEMDET | N-NOUN | PASS | REL
G6 | 14.68% | 19.96% | 42.07% | 46.58%
G7 | 6.67% | 12.96% | 18.89% | 24.81%
G8 | 5.33% | 12% | 17.33% | 16.67%

We shall start shortly on a journey back from this exploration of key elements of micro-structure, towards macro-structure, to demonstrate this building process and to consider diversity in the population in terms of acquisition of macro-structure. But before we do so we should not lose sight of the comparison with which we began this investigation – the L1 and L2 examinee data. To what extent is the trajectory of development of the four features examined in the L2 examinees converging on the usage of the L1 speakers? Table 5.9 looks, for Conversation in each of the grades, at the usage of these features by L2 speakers compared to their use by the L1 exam takers.

Table 5.9 Features at each grade in Conversation compared to the TLC L1.

Comparison | DEMDET | N-NOUN | PASS | REL
G6 to L1 LR | −0.6*4 | −0.56*4 | −1.75*4 | −1.87*4
G7 to L1 LR | −0.1 | −0.5*4 | −1.26*4 | −1.52*4
G8 to L1 LR | 0.04 | −0.59*4 | −1.17*4 | −0.77*4

The pattern in this table barely needs to be discussed. Grade 6 students are markedly different from the L1 exam takers, with under-use present across the features and notably high negative effect scores for PASS and REL. As we proceed through the grades, however, the size of this negative effect declines, though it remains pronounced for N-NOUN, PASS and REL. Convergence is achieved for DEMDET, with something like parity with the L1 examinee data from grade 7 onwards.

What of Discussion? So far, we have seen that the features under examination in that task tend to be both more numerous and widely used by the learners at all grades because, we posit, of the semi-prepared nature of this task. We might, therefore, expect that the convergence apparent in Table 5.9 would be accelerated in Discussion. Table 5.10 explores this.

Table 5.10 Features at each grade in Discussion compared to the TLC L1.

Comparison | DEMDET | N-NOUN | PASS | REL
G6 to L1 LR | −0.38*4 | −0.12 | −1.95*4 | −1.80*4
G7 to L1 LR | −0.29*3 | 0.03 | −1.22*4 | −1.23*4
G8 to L1 LR | −0.21*1 | 0.12 | −1.16*4 | −1.11*4

If we establish a threshold of a log-likelihood score above the 99.9 per cent (*3) level for us to consider an effect as noteworthy, the contrast between the two tasks becomes very clear. N-NOUN, which remains a persistent point of divergence in Conversation, is not a point of divergence at all in the Discussion data – the results, using our benchmark, seem indistinguishable from the L1 data. Conversely, DEMDET, which ceases to be a point of divergence in Conversation at grade 7, is a persistent point of divergence in the Discussion data, though it falls below our log-likelihood threshold at grade 8. So DEMDET and N-NOUN seem to be subject to pressures of selection based upon task, while PASS and REL appear to be persistent points of divergence that we may assume link to acquisition within the population of learners studied.

Before returning to consider these tables, it is worth considering some of the other trends discussed so far. For example, what of the proportion of outliers in L1 data? With the L2 data, outliers were not discussed as the observation did not seem valuable, other than to say it was hard to discern a pattern. The same is true with both the L1 data and the L1 data in comparison to the L2 data. Some of these features at L1 have a higher proportion of outliers than evidenced at grade 8 in Table 5.7 (e.g. in Discussion 12.67 per cent at grade 8 versus 19.58 per cent for PASS) while others are lower (e.g. in Discussion 15.33 per cent at grade 8 versus 8.99 per cent for L1 for N-NOUN). Outliers do not seem to be diagnostic of acquisition, in part, perhaps, because the speakers in the corpus have choices – they are selecting discourse units with a purpose in mind. If their purpose varies, or if they select one of a number of ways of achieving a purpose, then, where those discourse units rely on different micro-structures, we see a degree of permissible variability in frequency of a feature across speakers. We might assume that the volatile nature of outliers is a reflection of this. More telling is the standard deviation from the mean of uses per utterance and the number of speakers using none of these features. The trend we saw in the L2 data was for the standard deviation to broaden and the proportion of speakers not producing these features to contract. Table 5.11 shows the standard deviation per feature in each task in the L1 data.

Table 5.11 The standard deviation of four grammatical features across the Conversation and Discussion tasks in the L1 data.

Task | DEMDET | N-NOUN | PASS | REL
Standard deviation – Conversation | 1.99 | 2.02 | 2.14 | 1.72
Standard deviation – Discussion | 2.17 | 2.34 | 2.21 | 1.93

In general, the standard deviations have widened from what we saw in grade 8 – this is true for all values in Conversation. In Discussion, it is true of all features with the exception of DEMDET, where the standard deviation is slightly lower. However, overall, the pattern that we have seen – that with escalating proficiency we see escalating diversity in usage between speakers – holds here, we would argue. With regard to users not using a feature, Table 5.12 compares the L1 users to the grade 8 speakers, allowing us to explore whether the trend we saw in Tables 5.5–5.7 continues into the L1 data. It does. This aligns well with the discussion earlier of acquisition being about the spread of these features across the population, and that spread enhancing diversity in usage which, in turn, widens the standard deviation. Of particular note in the table are the very notable differences in usage of REL and PASS.

Table 5.12 Users not producing a grammatical function, by task, in grade 8 L2 data and the L1 data.

Grade 8
Task | DEMDET | N-NOUN | PASS | REL
Speakers not using – Conversation | 28 (18.67%) | 67 (44.67%) | 72 (48%) | 54 (36%)
Speakers not using – Discussion | 29 (19.33%) | 27 (18%) | 44 (29.33%) | 51 (34%)

L1 data
Task | DEMDET | N-NOUN | PASS | REL
Speakers not using – Conversation | 12 (9.52%) | 49 (25.93%) | 9 (4.76%) | 21 (11.11%)
Speakers not using – Discussion | 28 (14.81%) | 43 (22.75%) | 29 (15.34%) | 19 (10.05%)

To swiftly consider a comparison with Table 5.8: in the L1 data we see much lower rates of failure to produce these features in either task (cf. Table 5.12), as Table 5.13 shows, where the L1 and grade 8 data are contrasted. So the idea that the spread of the features across the population is a key part of the development of the usage of the feature is corroborated by this observation.

Table 5.13 Speakers not using a specific feature in either task in the L2 grade 8 data and the L1 data.

Measure | DEMDET | N-NOUN | PASS | REL
Percentage of users not producing this function in either task at grade 8 | 5.33% | 12% | 17.33% | 18.67%
Percentage of users not producing this function in either task in the L1 data | 2.12% | 6.88% | 1.06% | 0.53%

Returning to Tables 5.9 and 5.10, while both tasks show a pattern of convergence, the nature and speed of that convergence varies by task. Curiously, it is the more spontaneous task where that convergence becomes most marked. We may think of various hypotheses to explain this but the one we will explore here relates to another factor we have yet to discuss – the examiner. To what extent may differences in examiner prompts explain what we are seeing here? The hypothesis that we will explore is that learner perception of the difficulty of use of these features lingers into grade 8. On the one hand, given the opportunity to prepare part of a task, as happens in Discussion, they will use these features, but not as frequently as L1 speakers. On the other hand, in the more spontaneous Conversation task, the examiner has more scope to prompt the production of these forms. Exploring these hypotheses fully is demanding, so we will satisfy ourselves with two observations here, either of which may corroborate or refute these hypotheses. Firstly, in terms of examiner behaviour, we look at how often the examiner scaffolds the production of these features by modelling them for the learner—that is, the examiner uses the feature and this then stimulates the learner to produce it also. If this is more marked in Conversation, we have a possible explanation for the patterns in Tables 5.9 and 5.10. Secondly, if the features in Discussion are associated with pre-prepared material, we should see them having a strong preference for occurring at the start of the task – Discussion begins with the pre-prepared material. If the students are, in fact, using these features in Discussion disproportionately at the start of the task, this is a corroboration of the hypothesis that it is the partially pre-prepared nature of Discussion that drives some of the differences between the tasks.

5.5 The Role of the Examiner

Let us examine the hypothesis that we may be seeing behaviours prompted by the examiner. To explore this, we looked at all examples of where a feature we are focused upon occurred in a turn spoken by an examinee after that feature had been used in the preceding turn by the examiner. If the student is echoing features at lower levels of proficiency, then, as the grade of the exam increases, we might reasonably expect to see a steady decline in this behaviour. Of course, it could be that we may see the reverse – perhaps mirroring the speech of the examiner is a good tactic that better performing students display. But whatever emerges, if we have a consistent pattern of change across the grades in each task, we would have strong evidence to suggest that the student echoing the examiner was a possible explanation for some of our observations. Tables 5.14 and 5.15 show the percentage of uses of each of the four features by L2 examinees which occur in a turn following one in which the examiner has used the features.
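
The echo measure reported in Tables 5.14 and 5.15 can be operationalised as in the sketch below. It assumes each exam transcript is available as an ordered list of (speaker role, feature set) pairs, and it counts examinee turns containing the feature rather than individual tokens – a simplification; the representation and names are ours.

```python
def echo_proportion(turns, feature):
    """Proportion of examinee turns containing `feature` whose immediately
    preceding turn was an examiner turn containing the same feature.

    `turns` is assumed to be an ordered list of (role, features) pairs, where
    role is 'examiner' or 'examinee' and features is the set of grammatical
    features tagged in that turn.
    """
    examinee_uses = 0
    echoed_uses = 0
    for i, (role, features) in enumerate(turns):
        if role == 'examinee' and feature in features:
            examinee_uses += 1
            if i > 0:
                prev_role, prev_features = turns[i - 1]
                if prev_role == 'examiner' and feature in prev_features:
                    echoed_uses += 1
    return echoed_uses / examinee_uses if examinee_uses else 0.0
```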

Table 5.14 The percentage of uses of a feature in examinee speech following the use of the same feature by the examiner in Conversation.

Task | DEMDET | N-NOUN | PASS | REL
G6 Conversation | 7.33% | 4.15% | 2.38% | 1.33%
G7 Conversation | 4.05% | 5.02% | 5.26% | 2%
G8 Conversation | 2.08% | 1.16% | 3.70% | 2.66%

Table 5.15 The percentage of uses of a feature in examinee speech following the use of the same feature by the examiner in Discussion.

Task | DEMDET | N-NOUN | PASS | REL
G6 Discussion | 4.88% | 4.23% | 1.51% | 0.57%
G7 Discussion | 2.15% | 4.50% | 2.20% | 0.62%
G8 Discussion | 4.18% | 4.57% | 3.59% | 1.45%

The first thing to note is that, for each task and at all levels, the proportion of uses of any of the four features that occur directly after the use of the feature by an examiner represents a small proportion of the data, irrespective of whether the task is spontaneous (Conversation) or potentially partially prepared (Discussion). This casts immediate doubt on this being a substantial source of explanation for the variations observed. Secondly, there is no consistent pattern in either table. The strong patterns that we saw emerge earlier in the chapter are simply absent here. No feature shows a consistent pattern of behaviour across both tasks. So, while echoing of the functions used by the examiner does occur, it is not systematic to the extent that we can look to it as a source of explanation for the patterns of acquisition we have observed so far.

If we shift to consider the second hypothesis, that the ability to plan some of the Discussion task may distort our perception of proficiency in that task, what do we see? Determining where the potentially prepared part of the Discussion ends in the task is not easy – it is not marked formally. So we compared the distribution of the features by turn, normalised PTKW, in the Discussion and the Conversation tasks. Our goal was to see whether the two correlated well and, if they did not, whether there was a clear distortion in the distribution, with the features of interest clustering towards the start of Discussion. The results show that the distribution of the features is highly correlated – with Pearson rank correlation scores of 0.863 (DEMDET), 0.808 (N-NOUN), 0.86 (REL) and 0.853 (PASS).Footnote 5 Of course, the correlation is not perfect and it may be that, for example, in the first five turns produced by the examinee, in which we are likely to see any effects from preparation, the correlation would be weaker. If we compare the normalised Conversation and Discussion data looking at the first five turns from the examinee only, we find that, if anything, the correlation is even stronger, with scores of 0.9 (DEMDET), 0.872 (N-NOUN), 0.9 (PASS) and 0.9 (REL).Footnote 6 So again, there is no evidence that the distribution of the features is skewed in Discussion relative to Conversation. Thus, at least in so far as we have been able to explore whether this happens, we can set aside the argument that preparation for the task alone explains the frequency of these features in the Discussion task – their elevated frequency is, we would argue, more likely due to the demands of the task itself. When we look at some Discussion examples, it is clear that the opportunity for producing lengthy prepared sequences is not really present – examiners ask questions of the students which effectively counter such a tactic. If a student tries to produce a monologue, examiners are more than prepared to intervene to disrupt the discourse strategy, as in the following case from file 2_7_IN_51, where a student graded B for their Discussion tries to launch into a monologue:

(52)

E: so what are you going to talk to me about?

S: I’m going to talk about my topic is on ambitions standing for capabilities

E: okay

S: so first do our ambitions always define our capabilities? in my opinion yes ambitions define our capabilities because if we know what we are capable of we can choose that ambition and we can go to that field and we can make interest in that field and learn new things ambitions an easier way to take here following our ambitions is an easier way to take because we can experiment new things in that field and learn more and more with that field but there’s never the success we get failure also but we have to try again how do teenagers usually shape their ambit=

E: this is whoa whoa s= calm down

S: yeah

E: calm down let me erm ask you some questions

So while it is possible to see some monologues in the Discussion data, they are clearly not part of the task and are challenged by the examiner. Hence, Discussion does not really militate in favour of the production of lengthy, pre-prepared contributions.
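
To illustrate the positional check described before Example 52, the sketch below correlates the per-turn-position PTKW profile of a feature in Conversation with its profile in Discussion, optionally restricted to the first few examinee turns. It assumes those profiles have already been computed as equal-length lists; the use of scipy and the variable names are our assumptions.

```python
from scipy.stats import pearsonr

def positional_correlation(conversation_profile, discussion_profile, first_n=None):
    """Correlate the normalised (PTKW) frequency of a feature by turn position
    across the two tasks; optionally restrict the comparison to the first
    `first_n` positions (e.g. the first five examinee turns)."""
    if first_n is not None:
        conversation_profile = conversation_profile[:first_n]
        discussion_profile = discussion_profile[:first_n]
    r, p = pearsonr(conversation_profile, discussion_profile)
    return r, p
```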

5.6 The Micro- and the Macro-Level

At this point we can marry an appreciation of the micro- with the macro-level in two ways. Firstly, with regard to linguistic features – the acquisition of micro-structures important to the generation of a macro-structure is key to the L2 speaker being able to generate that macro-structure. Secondly, at the micro-level in a population (the individual) we may find behaviours not readily observed if we limit ourselves to exploring behaviours that characterise the population in general. To show how these two points marry, we will explore how outliers, such as those highlighted in Tables 5.5–5.7, intersect with the functions at the micro- and macro-structural levels in the TLC.

To begin with, we will briefly present the discourse unit functions in the TLC L1. These are shown in Table 5.16. As with the turn-level functions, we note the labels assigned to the function and the number of features only. Many of these functions are, by now, familiar from prior analyses while the new functions, marked with an asterisk, will be discussed in Chapter 7.

Table 5.16 Functions at discourse unit level in the TLC L1 in tasks shared between L2 (grades 6–8) and L1 speakers (grade 12).

Dimension | Label | Number of features linked to the function
Dim. 1 +ve | Discourse Management* | 37 (all absences)
Dim. 1 −ve | Extended Narrative* | 41
Dim. 2 +ve | Realis | 29
Dim. 2 −ve | Irrealis | 19
Dim. 3 +ve | Affective* | 34
Dim. 3 −ve | Informative and Instructive | 1
Dim. 4 +ve | Seeking and Encoding Stance | 23
Dim. 4 −ve | Informational Narrative | 20
Dim. 5 +ve | Information Seeking | 23
Dim. 5 −ve | Situation-Dependent Commentary* | 22

With a firm view of the discourse functions present at the micro- and macro-structural levels, we can begin to explore how outliers in the analyses presented so far and discourse functions (at the micro- and macro-structural levels) interact. This quest may seem hollow – our analyses in Chapters 2–4 have already covered this. However, it is worth noting that the discourse units in Chapters 2–4 were identified based on the pattern of co-occurrence of features across the population in the corpus. It is a view of the data which characterises the population as a whole. Through that, the language of individuals is characterised by the categories identified; that is, our description of functions derives from the whole population and is then used to account for all data in the corpus.Footnote 7 Yet it could well be that an individual may vary from that normative description in part or whole. It is also possible that, if we see such variation, we may then return to the TLC L1 analysis and, perhaps, assess whether that variation represents a closer fit to TLC L1 functions.

Our way of finding outliers is through the observations made in this chapter thus far – while outliers are not useful in showing us systematic differences between the L2 and L1 speakers, it may be that in the L2 outliers we find speakers who are outliers because some features of their usage are functionally atypical of the general population. So, we looked at the outliers – speakers whose use of the four features studied in this chapter was non-normative – and focused on users who were outliers with regard to the use of DEMDET, N-NOUN, PASS and REL, and who, in a single utterance, used the four features together. Having identified such turns, we can then see what discourse unit function this turn is associated with and whether it is in any sense unusual, functionally.
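
The filter for candidate turns just described can be expressed as in the sketch below. It assumes turns are available as mappings from feature mnemonics to counts and that the set of outlier speakers has already been derived (e.g. with a profile of the kind sketched earlier in the chapter); the names are ours.

```python
TARGET_FEATURES = ('DEMDET', 'N-NOUN', 'PASS', 'REL')

def candidate_turns(turns, outlier_speakers):
    """Turns by outlier speakers in which all four target features co-occur.

    `turns` is assumed to be a list of (speaker_id, feature_counts) pairs and
    `outlier_speakers` a set of speaker IDs flagged as outliers for the
    features of interest.
    """
    return [
        (speaker, counts)
        for speaker, counts in turns
        if speaker in outlier_speakers
        and all(counts.get(f, 0) > 0 for f in TARGET_FEATURES)
    ]
```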

This rough yardstick for identifying outliers who may be of interest yielded few examples – six in the whole TLC.Footnote 8 Also, these outliers did not directly yield what we hypothesised – evidence that a speaker might be producing an unexpected function. All of the turns in which the four features occur together do so in discourse units strongly associated with positive Dimension 4 (Informational Narratives) of the TLC discourse units analysis, as presented in Chapter 4 (see the discussion of Table 4.7). They are also nearly all from the Discussion task (five out of the six, with the final example from Conversation) and are produced by six separate students. These students received marks of A (one student), B (four students) and C (one student) for the task in which they used the example. No grade 6 students were in the examples. Four of the examples were produced by grade 8 students, while the other two examples were presented by grade 7 students. While impressionistic, we might hypothesise that grade and task are important as part of the explanation for these examples – they occur above grade 6 and seem facilitated by Discussion. A closer look at the examples may yield insight into why this is the case. Consider the discourse unit that follows, from the Discussion section of an exam taken by an Indian grade 7 student (corpus file 2_7_IN_28) who was awarded a B for this task:

(53)

E: tell me more

S: er the this company was started in the nineteen sixties er it made a l= it made its first car which was a Four Hundred GT and Three Hundred GT er they were not very successful but they’re successful which was the Lamborghini Miura it was a very very <unclear text=‘nit’/> car and er <unclear text=‘it i= was at’/> in nineteen sixty one to nineteen seventy it were it held the r-record for the fastest car in the world

E: right how fast?

S: it was around two hundred and er fifty two two hundred and sixty kilometres per hour

This discourse unit is close to being considered a prototypical Informational Narrative discourse unit – of the 10,339 discourse units which are given a positive dimension coordinate for this function, this discourse unit ranks 178th in its association with Informational Narrative. Its reasons for being so strongly associated with this side of the dimension are obvious – the student is clearly quite passionate about Lamborghini and is trying to persuade the examiner how excellent the company is by telling the story of its development. Also, in terms of form-to-function relationships, the example aligns well with the features whose presence marks an Informational Narrative, as Figure 5.2 shows. Viewed through the lens that the analysis of all students provides, the discourse unit is clearly Informational Narrative.

Figure 5.2 Features present in the Informational Narrative function in the TLC – grammatical functions emboldened are shared with the TLC L1 Informational Narrative function and those functions underlined are present in Example 53.

So it would appear that, while the Informational Narrative function, and the Discussion task in particular, seem to permit some learners to produce rare combinations of grammatical functions in the same turn, this does not impact at the level of discourse function.

Returning to the micro-structural level, might it be the case that our outliers, while they do not form a different function at the discourse unit level, do constitute a new function at the micro-structural level? This is the level, after all, at which the four features we are observing come together to produce our six outliers. If we look at the categorisation of these turns through short-text MDA, they can clearly be categorised as Elaborated Speech (Dimension 1 positive, see Table 5.1) – all of the examples, when mapped to this dimension, have positive polarity and this is the only dimension where all of the examples have a contribution score higher than 0. In Dimension 1, the contribution scores of these examples range from 0.002 to 0.005, while their dimension coordinates lie between 0.874 and 1.323. There are 199,066 turns in the TLC, 83,806 of which have positive polarity for Dimension 1. If we rank the six examples on their dimension coordinate and consider their distribution across quintiles from the first (top) to fifth (bottom), we find that all of the examples are in the top quintile of dimension coordinates for turns with an Elaborated Speech function. Example 53 is the highest ranked, appearing in the top 1 per cent of all turns on the positive side of Dimension 1.Footnote 9 So, once again, while these examples vary from the norm, this does not impact upon discourse function at the micro-structural level any more than it does at the macro-structural level.
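
The quintile placement reported here amounts to a percentile rank of a turn’s dimension coordinate among all positive-polarity turns on that dimension, as in the sketch below (variable and function names are ours).

```python
def percentile_rank(coordinate, positive_coordinates):
    """Rank a turn's dimension coordinate among all turns with positive
    polarity on that dimension; returns the proportion of turns with a
    coordinate at least as high, so 0.01 means 'top 1 per cent'."""
    higher_or_equal = sum(1 for c in positive_coordinates if c >= coordinate)
    return higher_or_equal / len(positive_coordinates)

def quintile(coordinate, positive_coordinates):
    # 1 = top quintile, 5 = bottom quintile.
    rank = percentile_rank(coordinate, positive_coordinates)
    return min(5, int(rank * 5) + 1)
```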

5.7 Next Steps

Having failed to see any meaningful variation that might be linked to proficiency in our outlier analysis so far, we can turn to consider how similar the functions in the TLC are to those in the TLC L1. However, we must pause before we undertake such an analysis. In order to facilitate the investigation of the L1 and L2 functions, one important factor which constrains, and possibly distorts, behaviour in both corpora examined has yet to be considered: the construct on which the exam underlying the data is based. The exam is composed of a series of tasks and those tasks are supposed to prepare the students taking the exam for encounters with proficient British English speakers in everyday contexts. While we can see how L1 British speakers act when required to perform in accordance with the construct, and we can see how the L2 speakers may be said to converge or otherwise with that behaviour, we cannot conclude that the L1 speakers would produce the same discourse functions in everyday conversational British English – we should entertain the possibility that the exam itself, though it seems to generate a set of linguistic behaviours across the population that we may well describe, may actually require the speakers to produce English in a way that is atypical of everyday conversational interaction. For example, Discussion requires speakers to prepare to lead a discussion on a particular topic and they are afforded the opportunity to prepare at least how they will initiate that conversation. There are analogous situations in spoken English – introducing your work in the context of a job interview, for example. But while, on the one hand, this task may make a lot of sense from the point of view of helping learners to prepare for an exam, it does not, we think, represent a very common form of interaction for native speakers – so the construct, in this case, may be said to skew towards uncommon events of a sort that, it might be argued, the L2 speakers are in fact unlikely to encounter. On the other hand, it may be that while the framing of the task is not something L1 speakers would experience frequently, the linguistic resources – in this case discourse functions – that they would draw upon to fulfil the task would come from a versatile repertoire of discourse functions that we would encounter in a broad range of contexts of spoken interaction. To explore the utility, or otherwise, of the construct on which the exam is based, in the following chapter we undertake a brief short-text MDA of discourse functions in the Spoken BNC 2014. Following that, in Chapter 7 we will reflect on any apparent similarities and differences between the performance of the L1 speakers in the Trinity exam and the performance of L1 British English speakers in everyday spoken interaction.

Footnotes

1 While it might be argued that in some cases the goal of L2 English acquisition is not native-like performance, the aim being English as a lingua franca instead, for example (see, e.g., Curry, 2019), we should note that the students represented in our corpus are enrolled for courses which lead to an exam in which they are judged explicitly against L1 British English norms. So, in this case, this seems a safe assumption to make.

2 Each grammatical feature has two categories, namely presence or absence. Thus, while only 60 features occur in more than 5 per cent of the turns, this means that there are actually 120 features overall.

3 Note that the shift to the individual is a deliberate strategy and research choice here, in line with the injunction of McEnery and Brezina (2022: 220) to ensure that an ‘abstraction away from the individual should not exclude or obscure the individual and variation from the normative’.

4 There are echoes in these findings of those of O’Keeffe and Mark (2017) regarding the non-linearity of SLA. The findings of these authors regarding speaker-external variables – specifically, for this chapter, task – and the competence-based approach they take are also mirrored in our work.

5 In each case the p value for the correlation is well beyond the 99.9 per cent level, with scores of 5.669329E-21, 1.334889E-16, 1.173599E-20 and 4.878735E-20 respectively.

6 The p value scores are 0.08, 0.05, 0.08 and 0.08, respectively.

7 Note here, in the words of McEnery et al. (2023: 88), we are seeing whether the normative view obscures individual variation. Our goal overall should be not to lose focus on the individual (‘principle 32’).

8 For the total number of outliers at each level for each feature, see Tables 5.5–5.7.

9 In terms of the percentage in which the examples are ranked, from highest to lowest, the turns are in the top 0.8 per cent, 2.9 per cent, 4.1 per cent, 4.6 per cent, 10.1 per cent and 10.8 per cent of turns.
