
The Course Perceptions Questionnaire: Development and Some Pilot Research Findings

Published online by Cambridge University Press:  20 November 2018


Abstract

This study describes the development of a questionnaire for measuring instruction and studying the interaction between instruction and learning. Instruments such as the Course Perceptions Questionnaire (CPQ) can help identify how different instructional methods affect what and how students learn and thus should help determine whether there is a single best method of instruction that results in the most learning by all students. Also, instruments like the CPQ can provide instructors with information on how their instruction is perceived by students (and, more generally, whether the methods instructors believe they are using are, in fact, the methods they are using) and how students respond to their instruction.

Type
Research Article
Copyright
Copyright © American Bar Foundation, 1981 


References

1 Hedegard, James M., The Impact of Legal Education: An In-Depth Examination of Career-relevant Interests, Attitudes, and Personality Traits Among First-Year Law Students, 1979 A.B.F. Res. J. 791. A study by Edward L. Kimball, Larry C. Farmer, & D. Glade Monson on the effect of study time on examination performance at BYU will appear in the next issue of the Research Journal.

2 For a detailed and perceptive description of law classroom instruction in different law school settings, see Charles D. Kelso, The AALS Study of Part-time Legal Education: Final Report, Association of American Law Schools 1972 Annual Meeting Proceedings, Part One, Section II (Washington, D.C.: Association of American Law Schools, 1972).

3 Barak Rosenshine & Norma Furst, The Use of Direct Observation to Study Teaching, in R. W. M. Travers, ed., Second Handbook of Research on Teaching (Chicago: Rand McNally, 1973).

4 Michael J. Patton, The Student, the Situation, and Performance During the First Year of Law School, 21 J. Legal Educ. 10, 33 (1968).

5 One of the questionnaires most widely used in university settings is the Brown-Holtzman Survey of Study Habits and Attitudes. The Brown-Holtzman questionnaire differs from many student study habits questionnaires by including items that ask students about their interests, motivation, and attitudes toward studying and learning in formal educational settings. See Brown, William F. & Holtzman, Wayne H., A Study Attitudes Questionnaire for Predicting Academic Success, 46 J. Educ. Psych. 75 (1955); id., SSHA Manual: Form C (New York: Psychological Corporation, 1967).

6 Loftman, Guy R., Study Habits and Their Effectiveness in Legal Education, 27 J. Legal Educ. 418 (1975). The questionnaire is presented as appendix D in the article (at 459).

7 Apparently pooling the four courses, Loftman obtained a multiple correlation between course grades and a set of three study behaviors: number of hours of class attendance, number of hours spent reviewing class notes and other materials you developed yourself (both of these variables were positively correlated with grades), and number of hours spent doing unassigned work in Gilbert Law Summaries and the like (the correlation here was negative). Id. at 439.

8 Id. at 433.

9 Supra note 4, at 21, 45.

10 Another form of the CPQ has been developed: a form in which the instructor describes his/her instructional techniques and his/her perceptions of learning-relevant student responses to these techniques.

11 A detailed description of the BYU student group, in terms of their backgrounds and their fit into the general population of American law students, appears in Hedegard, supra note 1, at 812.

12 These questionnaires, the items in them relevant to some of the findings reported in this article, and the scoring procedures are described in detail in id. at 809.

13 By the end of the second semester, the students would begin filling out the CPQ before the student proctor could begin reading the by now familiar instructions. In at least one class the proctor, diligently beginning his task of reading instructions, was silenced with “Sit down and save your breath, we know the rules by heart!”

14 We used the random student numbers to provide anonymity for each student and confidentiality of the student's responses. All student data were received and processed by us accompanied only by the code numbers. Students affixed their own code numbers to response sheets. The only links between the code numbers and student names were (1) a list kept in the dean's office at BYU and (2) students' personal records of their own code numbers. The American Bar Foundation has no copy of the list. Thus we could link student responses to code numbers, and the BYU dean's office personnel could link code numbers to student names, but neither could link student names to their responses. The name-code number list was maintained so that students completing questionnaires could obtain their code numbers if they had forgotten them and dean's office personnel could send information, such as grades, identified to us only by code numbers.

15 The assignment of faculty members to course sections is depicted in the table below. The 11 first-year instructors are numbered 1 through 11 in the table. The five major courses are labeled A through E.

16 Occasionally a student would be absent from class the day the CPQ was filled out for that class. The student might then complete the CPQ for that class in another class (after completing the CPQ for the other class) or outside of class time. The class-identifying items were included so that the student's CPQ could be included in tallies for the correct class. The student form of the CPQ employed in this study is reprinted in its entirety as appendix A at the end of this article.

17 The reader, on reading descriptions of these indices later in this section of the paper, may feel that certain student indices might more appropriately be classified as instruction indices, and vice versa. Careful reading of the index descriptions, however, should at least make it clear why we placed each index in its category.

18 On the instruction indices, the aspects of instruction tapped by the items on an index tended either to all be present in an instructor's technique of instruction or to all be absent. E.g., among the instructors in our BYU sample, those who frequently have “students argue different sides of the same issue” in class (CPQ item 38) tend also to have “students compare the value of alternative courses of action or positions on issues and take a position on their relative desirability” (CPQ item 39). Conversely, instructors who seldom do one of these things seldom do the other. Thus the two items have both become part of the Socratic method index.

On the student indices, the various student reactions to instruction tapped by the items on the index tended, in a single student, either to occur altogether or to be absent altogether. E.g., students who reported being “quite relaxed” when they anticipated being called on in class (CPQ item 31) also tended to report themselves “quite eager” to express disagreement with the instructor in the classroom (CPQ item 32). Conversely, students who reported themselves “quite tense” when they anticipated being called on in class also tended to report themselves “quite reluctant” to express disagreement with the instructor. Thus these items have both become part of the confidence index.

19 Decisions as to which items are combined into indices are inevitably subjective, in part. Such decisions are guided by both the theory under which the items were constructed and mathematical data on how items intercorrelate.
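To make the interplay of theory and correlation data concrete, here is a minimal illustrative sketch. It is not the authors' procedure: the item names, the simulated responses, and the 0.30 threshold are all hypothetical, chosen only to show how item intercorrelations can flag candidate groupings for an index.

```python
# Illustrative sketch only -- not the authors' procedure. Item names,
# response data, and the 0.30 threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 120

# Two hypothetical latent teaching-style factors drive the responses.
socratic = rng.normal(size=n)
lecturing = rng.normal(size=n)

items = {
    "argue_sides": socratic + 0.5 * rng.normal(size=n),        # cf. CPQ item 38
    "compare_positions": socratic + 0.5 * rng.normal(size=n),  # cf. CPQ item 39
    "lectures": lecturing + 0.5 * rng.normal(size=n),
    "summarizes": lecturing + 0.5 * rng.normal(size=n),
}
names = list(items)
corr = np.corrcoef(np.vstack(list(items.values())))

# Item pairs that intercorrelate strongly are candidates for combination
# into one index -- subject to the theoretical judgment the note describes.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) >= 0.30:
            print(f"candidate pairing: {names[i]} / {names[j]} (r = {corr[i, j]:+.2f})")
```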

20 The differences among students in their perceptions of instruction within a course section are due largely to student differences in expectations about the instructional methods that would (and/or should) be employed in a classroom, differences in prior exposure to various instructional methods, personality differences, etc.

21 In constructing the indices, we relied on item content more than on the mathematical data derived from the correlational analyses because of our concern that item groupings based on our correlation data from only 20 course sections at one school (and only 11 instructors) would be relevant only to the school under study.

22 In fact, the analyses of item response correlations within single course sections yielded very similar patterns of item groupings.

23 In a version of the CPQ designed to be completed by the instructor of a course section, high and low scores on each index represent the same instructional methods and student reactions but as viewed by the instructor. In an observational coding scheme designed for the use of classroom observers who are neither students nor the instructor of the course, high and low scores would again represent the same instructional methods but would be derived from syntheses of observations of classroom events.

24 Both the explanation and clarification indices indicate the extent to which instructors endeavor to make sure students understand course material, even if such understanding requires specific explanation by the instructor. Although we can imagine instructors providing clarification short of explicitly explaining course material (by, e.g., relying on students to clarify points via their own studying and class discussion and assessing such clarification by suitable questioning of students), our pilot data suggest that instructors who strongly seek clarity will explain material themselves in order to achieve such clarity.

25 This index isolates one form of instructor explanation and thus is part of the explanation index described above. Despite substantial overlap between the content of the two indices, I decided to examine the lecture index as a separate index during this pilot stage of research on the CPQ.

26 The reader may remark here that some areas of the law (such as criminal law) stress, in their nature, issues of right versus wrong whereas others (such as contract law) tend, again in their nature, to stress issues of rights versus rights. Nonetheless, within each of these areas instructors may vary in the extent to which they elevate one perspective over the other. For example, criminal law faculty may vary markedly one from another in the extent to which they devote class time and student attention to issues such as the rights of the accused or provocation and in the legitimacy they attach to such issues.

27 The method of calculating the course section mean student score for a single course section is described below (p. 479 infra).

28 A score of 173 would mean that a student reported studying 24 hours a day, seven days a week, on material in the course (and therefore spent no hours on other courses): 24 × 7 = 168 hours, plus the five units added to the total number of hours possible (a sum equal to the minimum value of the index for five courses), gives 173.

29 The narrower ranges of course section mean student scores on the student indices suggest that students respond to various law courses and instructors uniformly relative to the variations in instructional methods (and, perhaps, instructional objectives) they find in the various courses. Such an inference may be correct, but it cannot be made on the basis of the data in tables 1 and 2 alone. To make such an inference from these data alone requires evidence that the possible ranges of student indices represent the “same behavioral variations” as the possible ranges of instruction indices. Given the different contents of the various scales, such evidence is not available. It is not clear such evidence can be obtained. (Problems of comparing different kinds of behaviors have occupied the attention of philosophers and psychologists for centuries.)

30 The scoring schemes for the instruction and student indices are described at pp. 473–79 supra and notes 33–34 infra.

31 For example, the course section mean score on the Socratic method index for the large section of contracts I would be the mean of the Socratic method raw scores of the students in that section.

32 In characterizing a student's total instructional experience in the first law school year by the 16 total section mean raw scores, we hope to provide measures of experience that are minimally colored by the student's idiosyncratic perceptions of classroom events. These scores will, however, still reflect any collective distortions shared by large numbers of fellow students.

33 E.g., consider the calculation of a student's intrasection deviation score on the Socratic method index in the small section of contract law II. Suppose the student's raw score in contract law II on the Socratic method index was 12.00. Suppose, further, that the course section mean score on the Socratic method index was 10.93 and the standard deviation of the scores of the various students was 1.00. In this case the student's intrasection deviation score on the Socratic method index would be calculated as follows: intrasection deviation score = (raw score − course section mean score)/standard deviation of all student raw scores = (12.00 − 10.93)/1.00 = +1.07.

34 E.g., consider the calculation of a student's intersection deviation score on the Socratic method index in the small section of contract law II. Suppose (as in the last example) that the student's intrasection raw score in contract law II was 12.00. Suppose, further, that the mean of the student's ten intrasection raw scores (obtained from the student's own CPQ responses in each of his/her ten first-year course sections) was 13.20 and that the standard deviation of the student's ten intrasection raw scores on the Socratic method index was 0.80. In this case, the student's intersection deviation score on the Socratic method scale would be calculated as follows: intersection (personal section) deviation score = (raw score − mean of student's ten raw scores)/standard deviation of student's ten raw scores = (12.00 − 13.20)/0.80 = −1.50.
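The two deviation-score calculations in notes 33 and 34 are ordinary standardizations. The following minimal sketch (ours, not the authors' code) reproduces both worked examples; it assumes the population standard deviation (ddof=0), which is what the figures in the text imply.

```python
# Minimal sketch of the standardizations in notes 33 and 34 (ours, not the
# authors' code). Assumes the population standard deviation (ddof=0),
# which matches the worked examples in the text.
import numpy as np

def intrasection_deviation(raw_score, section_scores):
    """Standardize a student's raw score against all students in the section."""
    s = np.asarray(section_scores, dtype=float)
    return (raw_score - s.mean()) / s.std()

def intersection_deviation(raw_score, students_own_scores):
    """Standardize one course's raw score against the student's own raw
    scores across all of his/her course sections."""
    s = np.asarray(students_own_scores, dtype=float)
    return (raw_score - s.mean()) / s.std()

# The numbers from the text:
print((12.00 - 10.93) / 1.00)  # intrasection: +1.07
print((12.00 - 13.20) / 0.80)  # intersection: -1.50
```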

35 Again, the scoring schemes for the student index raw scores are described at pp. 476–77 supra.

36 P. 479 supra.

37 In our analyses to date we have made no use of total section mean response raw scores. Such scores, analogous to total section mean raw scores on the instruction indices, would describe the average responses of all students in all ten course sections taken by an individual student. Such a form of the student indices may be useful for some research purposes.

38 E.g., one of the questions we examined in our pilot study was whether students who tailored their study behaviors to different instructional methods would be more successful (in terms of course grades) than students who adopted a more uniform response to such method differences. One approach to answering such a question is to compare the grades earned by students who deviate from the modal responses of students in their courses to the grades earned by the more modal students.

39 P. 479 supra.

40 Personal section deviation scores were useful, e.g., in determining how and to what extent student study behaviors in courses emphasizing aspects of the Socratic method (as captured by the Socratic method index) differed from study behaviors in courses in which instruction was primarily in the form of lectures or lecturettes (as measured by the explanation, summarization, clarification, and lecture indices). Also, these scores provided another perspective from which to examine benefits and costs of tailoring study behaviors to different instructional methods.

41 Pp. 479–80 supra.

42 Supra note 3.

43 Moderate scores on lecture, Socratic method, and problem solving would indicate a course section in which the instruction blended lecturing and Socratic discussion as well as emphases on both acquisition of legal information and problem-solving skills. The “maximal” method variation referred to here is, of course, the maximal variation that could be identified by the CPQ instruction indices. By means of CPQ instruction items, a more comprehensive measure of variability of method within a classroom might be devised.

44 This dimension may deserve an item on a future form of the CPQ.

45 The analysis of variance is a statistical technique that provides an estimate of the probability that several different samples of scores could have been drawn from the same population. The differences among the mean scores from the several samples are compared with the differences among the scores within the samples. In general, the larger the differences among the sample mean scores relative to the score differences within the samples, the more likely it is that the samples were drawn from different populations (i.e., from populations exposed to different conditions—in this case to different teaching methods). As applied here, the samples are the groups of students in each of the 20 course sections. Suppose we examined the scores on the Socratic method index obtained from each student in each of the 20 course sections. Suppose further that we compared the differences among the 20 course section mean scores (i.e., the averages, for each section, of all student scores in that section on that index), as measured (in statistical terms) by the “variance” among the 20 section mean scores, with the differences among index scores within the 20 course sections, as measured (again in statistical terms) by pooling the index score variances of the 20 course sections. If the variance among the course section mean scores was large enough relative to the pooled variance within the course sections, we could infer (by reference to an appropriate probability table) that the instructors in the 20 course sections varied significantly (again in statistical terms) in the extent to which they employed the Socratic method (as described by the Socratic method index).

For more information on the analysis of variance and the theory and problems of statistical inference, see an elementary statistics text such as William L. Hays, Statistics for Psychologists 301–35, 356–73 (New York: Holt, Rinehart, & Winston, 1963).
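As a concrete illustration of the procedure note 45 describes, the sketch below computes a one-way F statistic for index scores grouped by course section. The data are simulated, not ours; scipy's f_oneway performs the same comparison of between-section to pooled within-section variance.

```python
# Hypothetical illustration of the one-way analysis of variance described
# in note 45. scipy.stats.f_oneway compares the variance among group means
# with the pooled variance within groups.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Simulated Socratic method index scores for 20 course sections of about
# 30 students each, with section means that genuinely differ.
sections = [rng.normal(loc=10 + 0.2 * k, scale=1.5, size=30) for k in range(20)]

f_stat, p_value = f_oneway(*sections)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# A small p-value indicates that the variance among section means is large
# relative to the pooled within-section variance, i.e., the sections likely
# differ in the use of the method the index measures.
```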

46 These data on small-large section differences are discussed in more detail later in this article.

47 See pp. 503–6 infra.

48 Once again, these statistically significant results mean that, on each of the student indices, at least 2 of the section means (out of 20) were sufficiently different from each other that we can reasonably infer that this difference represents real and not chance variation.

49 When two variables have a correlation coefficient (product moment) of +.30 or −.30, about 10 percent of the score variance on one of the variables can be accounted for by the other variable (or, roughly speaking, about 10 percent of what one of the variables is measuring is also being measured by the other variable). The two CPQ instruction indices whose scores were most substantially correlated within each of the four large course sections analyzed (correlations in the four sections ranged from −.30 to −.40) were the doctrine and problem-solving indices. These correlations indicate that students who perceive instructors as stressing the understanding of legal rules, doctrines, and procedures also tend to perceive the same instructors as stressing the teaching of legal information rather than the skills and strategies of problem solving.
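The "variance accounted for" arithmetic in note 49 is simply the square of the correlation coefficient. A small sketch with hypothetical data (the index names are from the text; the numbers are simulated):

```python
# Hypothetical illustration of "variance accounted for": it is the square
# of the product-moment correlation, so r = +/-.30 gives r^2 = .09.
import numpy as np

rng = np.random.default_rng(2)
n = 200

doctrine = rng.normal(size=n)
# Built to correlate negatively with doctrine, as the text reports.
problem_solving = -0.35 * doctrine + rng.normal(scale=0.95, size=n)

r = np.corrcoef(doctrine, problem_solving)[0, 1]
print(f"r = {r:+.2f}, variance accounted for = {r ** 2:.0%}")
```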

50 At the .05 level of significance (via two-tailed t tests). Because these correlation coefficients are calculated on the basis of 20 scores (the 20 course section mean scores) per variable, a correlation must be equal to or greater than .40 in magnitude to be significant at the .05 level.

51 The 11 pairs were: clarity/note review, clarity/note clarity, clarity/confidence, clarity/anticipation, note review/note clarity, note review/anticipation, note review/study hours, note review/notes on student contributions, reading/anticipation, note clarity/anticipation, and note clarity/notes on student contributions.

52 The four here were: clarity/note clarity, note review/note clarity, note review/anticipation, and note review/study hours.

53 Not only were the correlations between clarity and note clarity index scores higher than the correlations between clarity and scores on the other indices, but the correlations between clarity and note review scores were also higher than those between clarity and reading index scores. These findings held both for intrasection correlations and for correlations based on student total raw scores.

54 In this regard, note that the range of course section mean scores on the confidence index, shown in table 2 supra, is both small relative to several other student indices and confined to the upper portion of the available score range on the confidence index.

55 Pp. 493–94 supra.

56 Our data suggest that some of these small-large section differences also hold for course segments in which small and large sections were taught by different instructors. E.g., the data suggest that, even when small and large section instructors have somewhat different learning objectives and use somewhat different teaching styles, students in the small section tend to report slightly greater certainty about what they know and are supposed to be learning. They also report themselves slightly more diligent in preparation for class. In each of the five one-semester course segments in which the same instructor taught both sections, students in the small section earned slightly higher course grades. These differences were not, however, statistically significant.

57 This line of reasoning should not be read to imply that we believe that what students learn is largely determined by what students perceive instruction to be rather than by the instructional methods actually employed. Unfortunately the pilot study did not provide us with the opportunity to assess the relative merits of varied instructional methods applied to the same subject matters. This latter assessment would require a large-scale study involving numbers of classes taught by varied methods within each of the distinctive (in terms of characteristic kinds of information and skills) areas of the law.

58 LSAT scores are significantly correlated with GPAs among the BYU students in this study: the correlation between the two variables is +.24. LSAT scores are also correlated with scores on some of the CPQ instruction indices. Thus the column of partial correlations shows GPA-instruction index correlations with ability differences among students statistically eliminated (to the extent that they appear to affect the GPA-instruction index correlations).

The correlations in table 5 are not due to differences between course sections in the grade averages set by different instructors. By law school policy, grade averages in each of the one-semester course segments were to be set as close to 75 as possible. In practice, the differences between mean grades in the small and large sections of the same one-semester course segment also were very small.
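For readers unfamiliar with the partial correlations mentioned in this note, the standard first-order formula can be illustrated as follows. Only the +.24 LSAT-GPA correlation comes from the text; the other two coefficients are hypothetical.

```python
# Standard first-order partial correlation formula, applied to hypothetical
# values; only the +.24 LSAT-GPA correlation comes from the text.
import math

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation of x and y with z statistically held constant."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# x = GPA, y = an instruction index, z = LSAT score (ability).
print(partial_corr(r_xy=0.30, r_xz=0.24, r_yz=0.20))  # about +.26
```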

59 The thoughtful reader, at this point, is likely to say: “Of course this must be true, any other finding would be absurd.” In reply I would say only that sometimes research data support the “obvious,” and sometimes they do not.

60 E.g., suppose a student had a first-year GPA of 76. Suppose further that the student earned a grade of 78 in the small section of contracts I. Then, for the purpose of the analysis we are discussing here, the student's grade in contracts I would be +2.00 (78 − 76).

61 These analyses examined relationships between grade and reaction variations of individual students for different courses. Variations of individual students tended to be small relative to variations among students on the variables under study. We decided therefore to focus analyses on large sections, the larger number of students permitting real but weak relationships to be distinguished from random fluctuations in our data. Had strong and statistically significant relationships emerged in the large sections, we would have checked for corresponding relationships in the small sections.

62 This suggestion is reinforced to some extent by personality data indicating that students who did relatively well in several of these course sections tended to describe themselves as relatively “practical” as opposed to “abstract” in problem-solving orientation and (more relevant here) as relatively low in “independence of judgment,” which indicates these students might have a greater tendency than other students to look to the instructor for answers and issue resolution and to try to find these things even when the instructor was not providing them.

63 For example, item 4 on the instructor's form of the CPQ read as follows. (Change from the student form of the CPQ is italicized, and the words on the student form that replace the italicized words on the instructor form are shown in brackets immediately following the italicized words.) Other items, such as the three items on the black mark index and the item that constitutes the organizability index, are identical in the two forms. The instructor form of the CPQ is reproduced as appendix B of this article.

64 Fewer than 20 sections in some cases because of missing responses or written instructor responses that could not be given points corresponding to our scoring schemes for the various indices.

65 The one-item conscript index was not included on the CPQ form completed by instructors because we felt the student perceptions (of the extent to which instructors conscripted students as discussion participants) were quite accurate, based in most cases on students' recollections of in-class experience and usually on explicit policy stated by the instructor. We were also trying to keep the instructor form of the CPQ as brief as possible and felt the conscript item could be sacrificed at little cost to the pilot study.

66 An initial hypothesis was that the indices that yielded the similar rankings might require or elicit more objective judgments than the other indices. Our re-examination of the indices does not reveal (to us; the reader may reach a different judgment) a systematic difference between the two sets of indices in terms of objectivity.

67 See appendix B for instructor form items relevant to student indices.

68 For the exact form of item 34 on the student form, see appendix A infra. For its form on the instructor form, see appendix B infra.

69 For the full item, see appendix A infra.

70 For the full item, see appendix B infra.

71 For the full item on the student form, see appendix A infra. For the full item on the instructor form, see appendix B infra.

72 In nearly all classes, the instructors had to rely on observed student behaviors, student questions and answers, and information they gained from conversations with students outside of class to form their judgments. In only 2 of the 20 first-year course sections were mid-semester examinations or quizzes given to provide students and instructor with interim assessments of student learning.

73 The reader should keep in mind here that examination performance is a direct measure of learning and only an indirect measure of instruction. Further, instructor behaviors and, more generally, the events in the classroom determine both how and what students learn only through the student's perception of those classroom events. Thus subsequent research might reveal that real differences among classes in instructional methods, by producing perceptual variation among students that is larger and more uniform between course sections and that parallels some of the dimensions of within-section perceptual variation, relate to learning differences in ways that parallel the relationships we found within course sections.

74 With respect to this third key, it is not clear to what extent a student's awareness of cues can be modified by an act of will. Can persuasion result in more accurate perception of instructor cues or is accuracy of cue perception relatively fixed by the processes that have formed the student's character? I suspect the answer is that changeability in perceptual accuracy is a matter of degree. At the very least, informing students that instructors do use classroom events and course materials in distinctive ways and that they do provide cues in the classroom, sometimes clear but sometimes subtle (the instructor may not be aware of the cues), may help rather than harm students.

75 Our data indicate that less successful students may either (a) see concepts as more fixed than the instructor views them or (b) interpret instructor acknowledgments of ambiguity as indications that the instructor allows a wider latitude of interpretation than the instructor does, in fact, allow. Instructor comments on student examination answers indicate these two forms of student misperception: some student answers were criticized because they did not examine relevant alternative resolutions of issues; others were criticized because they ranged too widely, developing unproductive lines of thought without coming to grips with the most plausible resolutions of the issues.

76 No study published to date has directly compared correlations between law school grades and undergraduate achievement plus aptitude measures with correlations between the same law school grades and study measures. However, a study by Edward L. Kimball, Larry C. Farmer, & D. Glade Monson (Ability, Effort, and Performance Among First-Year Law Students at Brigham Young University) on the effect of study time on examination performance at BYU will appear in the next issue of the Research Journal. Related studies cover some of the ground. Data on correlations between first-year law school GPA and LSAT scores plus undergraduate GPA can be found in W. B. Schrader & Marjorie Olsen, The Law School Admission Test as a Predictor of Law School Grades, in Law School Admission Council, Reports of LSAC Sponsored Research, Vol. I, 1949–1969, at 9 (Princeton, N.J.: Law School Admission Council, 1976), and Lunneborg, Clifford E. & Lunneborg, Patricia W., Relations of Background Characteristics to Success in the First Year of Law School, 18 J. Legal Educ. 425 (1966). Studies such as these indicate a wide range of multiple correlations predicting first-year law school GPA from LSAT scores and undergraduate GPA. The bulk of reported multiple correlations lie in the range +.40 to +.60. In comparison, multiple correlations between first-year law school GPA and study behaviors as measured by the CPQ cluster around +.25. Loftman obtained a similar figure in his analyses of study variables predicting grades in individual law courses. Supra note 6, at 439.
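A multiple correlation of the kind reported in these studies is the simple correlation between the criterion (first-year GPA) and its best least-squares prediction from the predictors. The sketch below uses simulated data, not the published datasets; the coefficients are chosen only so that R lands near the reported +.40 to +.60 range.

```python
# Hypothetical sketch of a multiple correlation R: the correlation between
# observed first-year GPA and its least-squares prediction from LSAT score
# plus undergraduate GPA. Simulated data, not the published studies.
import numpy as np

rng = np.random.default_rng(3)
n = 150

lsat = rng.normal(size=n)
ugpa = rng.normal(size=n)
law_gpa = 0.4 * lsat + 0.3 * ugpa + rng.normal(scale=0.85, size=n)

X = np.column_stack([np.ones(n), lsat, ugpa])       # intercept + predictors
beta, *_ = np.linalg.lstsq(X, law_gpa, rcond=None)  # ordinary least squares
predicted = X @ beta

R = np.corrcoef(law_gpa, predicted)[0, 1]
print(f"multiple correlation R = {R:+.2f}")
```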

77 Supra note 6, at 448.

78 The questions on Loftman's questionnaires are reproduced as appendix D of his article. Id. at 459–72.

79 P. 477 supra.

80 Accounting in part for the substantial correlations between undergraduate and law school grades found by researchers such as Lunneborg & Lunneborg, supra note 76, at 435.

81 One could go on to argue that examination performance depends on both the crude dimensions and the fine detail of studying, so that a course grade is a measure of the effectiveness of study techniques analyzed in more detail than captured by the CPQ and related instruments. In this case, undergraduate grades could be considered substitutes for more detailed measures of study methods and, therefore, should be more strongly correlated with law course grades than measures derived from the cruder study habits and techniques measures.

82 I noted earlier in this paper that sensitivity to differences among instructors in the use they made of classroom-generated material, in their tolerance of ambiguity of legal concepts and doctrine, in their stress on problem-solving skills development vs. the learning of specific definitions and “accepted” resolutions of legal issues, and so forth, did contribute to earning high first-year law school GPAs.

83 For a discussion of the general problems of measuring aptitudes and achievement and of the practical problems in distinguishing the two concepts, see one of the books on psychological testing. Brief and thoughtful discussions can be found in Anne Anastasi, Psychological Testing (4th ed. New York: Macmillan, 1976), and Lee Cronbach, Essentials of Psychological Testing (3d ed. New York: Harper & Row, 1970).