Chapter 10 Assessment Practices in Higher Education in Brazil from the Students’ Point of View

Daniel A. S. Matos, Universidade Federal de Minas Gerais / Centro Universitário UNA, University of Florida
Sergio D. Cirino, Universidade Federal de Minas Gerais, West Virginia University
Gavin T. L. Brown, Hong Kong Institute of Education

We started our studies into Brazilian student perspectives of assessment with research into student views of the classroom learning environment. We investigated secondary students' perceptions of their science classroom learning environments using translated psychometric scales drawn from the Teacher Communication Behavior Questionnaire (TCBQ) (She & Fisher, 2000). In that study, we administered a Portuguese version of the TCBQ to a sample of 414 9th graders. Analyses of variance indicated that boys, more than girls, perceived their teachers as giving more encouragement and praise. Students were also found to perceive their female teachers as displaying more understanding and friendly behaviors. Furthermore, biology teachers were more positively perceived than physics teachers (Matos, 2006). As a result of meeting at the International Test Commission Conference in Belgium in 2006, we realized that the Brazilian research into classroom learning environments had some
synergy with the work in New Zealand into students' conceptions of assessment. The synergy focused on the topic of within-classroom inter-personal relations and the effect of those relations on student outcomes. Researchers from the classroom learning environment area have shown associations between students' cognitive and affective learning outcomes and classroom environment perceptions (Fraser, 2002). Our early study found that Brazilian secondary private school students perceived their teachers more favorably than did public school students, supporting findings of previous classroom learning environment research (Matos, 2006). In Brazil's National Evaluation System of Basic Education (SAEB), students are evaluated at the end of elementary, middle, and high school education with a standard test. In SAEB, the mean academic achievement scores of private school students are generally higher than those of public school students (INEP, 2005). While demographic variables contribute significantly to these differences, it was considered possible that differences in student perspectives might also play a role in differential attainment. In the New Zealand research into students' conceptions of assessment, two related scales (i.e., personal enjoyment and classroom benefit) were found as part of students' conceptions of assessment. These two scales belong to a major construct defined as affect/benefit (i.e., assessment has a positive affective impact on students). Positive emotional responses towards assessment, such as the motivation of classmates and assessment as an enjoyable experience, are included in this conception of assessment (Brown, Irving, Peterson, & Hirschfeld, 2009). Early studies showed that (1) students had low levels of personal enjoyment of assessment, (2) these factors predicted lower academic performance in reading and mathematics (Brown & Hirschfeld, 2007, 2008), and (3) these factors had statistically invariant relations to reading performance across sex, age, and ethnicity of students (Hirschfeld & Brown, 2009). Although one could
expect that positive personal or social enjoyment of assessment would raise academic performance, it appears that an affective conception of assessment is maladaptive. Brown and Hirschfeld (2008) suggested that students who prioritised affect probably did not take assessment seriously and thus did not pay attention or make sufficient effort to do well. In a subsequent study, Brown et al. (2009) found that the personal enjoyment factor negatively predicted test-like, teacher-controlled assessment practices (i.e., testing was not enjoyable). Moreover, the same study found that the classroom benefit factor positively predicted interactive-informal practices, which were also predicted by the factor that assessment is ignored. A third study (Brown, Irving, & Peterson, 2008; chapter x in this volume) showed the same pattern of relations of personal enjoyment and classroom benefit to assessment definitions and showed that informal-interactive practices were statistically unrelated to academic performance, while test-like practices predicted increased academic performance. Thus, it would appear that among New Zealand students the effect of emphasizing positive personal and classroom affect is fundamentally maladaptive towards academic performance. Brown et al. (chapter x in this volume) have argued that self-regulation and attribution theories explain how personal and social affect act in a maladaptive way. From this shared interest in the role of classroom relations in academic performance has come the first author's doctoral dissertation research into the conceptions of Brazilian university students concerning assessment. This chapter reports the results of a first study done as preparation for a large-scale survey of Brazilian university students' responses to a Brazilian version (B-SCoA) of the 6th version of the Students' Conceptions of Assessment (SCoA) inventory (Brown, 2003). The term conceptions is used to refer to the mental representations people have of the purposes and nature of complex systems; these representations include beliefs
or understandings about the system as well as their attitudes towards the system or its various parts (see Brown, 2008 for a fuller discussion of conceptions). It is generally agreed that assessment has three major purposes: to improve teaching and learning, to hold schools and teachers accountable for student learning, and to hold individual students accountable for learning; a rejection of all purposes (i.e., assessment is irrelevant) also exists (see Brown, 2008 for a full discussion). New Zealand investigations with high school students (Brown & Hirschfeld, 2008; Brown et al., 2009; Brown, Irving, & Peterson, 2008) into how they conceive of assessment have identified two additional purposes (i.e., assessment has an affective impact or emotional benefit, and assessment reflects external factors such as the future or the school). It is clear that students' expectations, preferences, perceptions, and evaluations of the characteristics of assessments (e.g., fairness, authenticity, and formats) affect both positively and negatively the approaches students take to university learning (Struyven, Dochy, & Janssens, 2005). While students' purposes for assessment might be inferred from such research, explicit attention to the goals or intentions students have for assessment has been less well investigated. Two studies with university students have looked at how students' conceptions of assessment influenced effort and learning strategies. The personal enjoyment factor negatively predicted the range of learning strategies used by German psychology students (Hirschfeld & von Brachel, 2008), while jointly the personal enjoyment and classroom benefit factors negatively predicted self-reported effort by American university students on a low-stakes examination (Wise & Cotten, 2008; chapter x in this volume). Hence, students' conceptions of assessment (i.e., their beliefs about and attitudes towards the multiple purposes of assessment) are especially important because they appear to influence student practices and outcomes (Entwistle & Entwistle, 1991).

Previous reviews have not included research published in or from Brazil; hence, this review adds to the international understanding of how university students conceive of assessment's purposes. This chapter reviews the empirical studies in the Brazilian literature into the experiences of assessment among higher education students. Then the results of a survey (N=702) into Brazilian tertiary students' definitions of assessment are reported. The guiding question for the survey was: When students think about assessment, which types of assessment activities come to their minds? The goals of this study were to (1) overview Brazilian university students' conceptions of assessment, (2) establish the underlying dimensions shaping Brazilian university students' definitions of assessment, and (3) generate hypotheses about how Brazilian university students might respond to the B-SCoA.

Brazilian University Students' Perspectives of Assessment: Review of Studies

In the last few years, research about assessment has received explicit attention in Brazil. For example, the literature includes studies about: assessment of the quality of the Brazilian educational system (fundamental, secondary, and tertiary levels) through standardized tests (Verhine, Dantas & Soares, 2006); assessment of social and educational projects (Dória & Tubino, 2006); assessment of public policy towards the educational system (D. B. Souza & Vasconcelos, 2006); assessment of the classroom learning environment (Matos, Cirino & W. L. Leite, 2008); and assessment of educational innovations (Borges, Gonçalves & Cunha, 2003). However, little research has considered the perspectives of test-takers or test-users. Vasconcellos, Oliveira, and Berbel (2006) administered a questionnaire with open-ended questions to 428 students from 14 undergraduate courses within a public university. Students were asked to report their positive and negative assessment experiences. Considering students' answers about their positive assessment experiences, the researchers selected and interviewed 48
teachers nominated by students as good evaluators. Through a content analysis of the teacher interviews, Vasconcellos et al. (2006) reported three major categories: 1) personal experiences: teachers have a tendency to repeat past assessment practices. In fact, assessment experiences that they had had as students played a significant role in teachers' assessment practices. Teachers also indicated that teacher-student relationships both in and out of the classroom shaped the types of assessments they used; 2) self-evaluation and reflective processes in teaching practice: teachers perceived self-evaluation as a continuous part of the evaluator's practice and thus monitored student reactions to various assessment practices and modified their practices in light of such feedback; 3) purposes of assessment practices: although many assessment procedures are available, teachers emphasized that the most important aspect was the evaluator's intention in using a specific type of assessment. Whatever type of practice is selected, it must result in effective learning for the students. This improvement-oriented conception was the dominant paradigm for highly rated professors. However, it is apparent that the emphasis in this study was on the views of the teachers rather than those of the students. A search of the Brazilian literature was conducted. Brazilian databases (e.g., Scielo, Pepsic) and the Brazilian Digital Library of Theses and Dissertations were searched online. We combined the following keywords, using both singular and plural forms: 'student conception' and 'assessment'; 'student conception' and 'evaluation'; 'student perception' and 'assessment'; 'student perception' and 'evaluation'; 'student representation' and 'assessment'; 'student
representation' and 'evaluation'; 'student belief' and 'assessment'; 'student belief' and 'evaluation'. We also searched international databases (Academic Search Premier, Mental Measurements Yearbook, Professional Development Collection, PsycINFO, Psychology and Behavioral Sciences Collection, and Research Starters - Education) by adding the word Brazil (e.g., 'student conception' and 'assessment' and 'Brazil'). Relevant documents were sought and selected. In some cases, only the abstracts of studies could be found; these studies were not included in this chapter because they did not meet our criterion (i.e., sufficient information to summarize the research results). Our review of the literature on students' conceptions of assessment is classified by the number of conceptions reported (i.e., few or many), though it could be equally interesting to contrast studies by whether they were qualitative or quantitative. The literature review concludes with material on studies that reported student perspectives on types of assessment.

Studies with Few Conceptions

Most studies identified just one or two conceptions among students. For example, Camargo (1997) used discourse analysis to interpret written reports from 390 students in an Education undergraduate course. Using a theoretical background based on the French philosopher Michel Foucault, the Russian semiotician Mikhail Bakhtin, and Social Representations Theory, Camargo analyzed how assessment was represented in students' experiences of being evaluated throughout their lives (i.e., at the primary, secondary, and tertiary levels). Camargo sought to explain the meaning contained in assessment situations as well as to identify the social-educational links. Students reported mostly negative (80%) assessment experiences, though positive experiences were also reported. On the negative side, students associated assessment
with control, passiveness, strict rules, punishment, criticism, submission, bad examinations, and affective reactions. On the positive side, they linked assessment to a continuous process (formative assessment), a diversity of evaluation instruments, feedback about mistakes, and well-prepared teachers. Both the emotional effect and student accountability conceptions were apparent in this study. Neves (2002) investigated the representations of the assessment of orality in the learning of English as a Foreign Language (EFL) at a Brazilian public university. Neves used an interdisciplinary approach, including theoretical background from discourse analysis (e.g., Michel Foucault, Michel Pêcheux), applied linguistics, and psychoanalysis. The sample consisted of 15 students and 4 teachers, who completed a questionnaire with 32 open-ended questions (e.g., What do native speakers think about your proficiency in English?). Traditional practices for assessing English as a Foreign Language in undergraduate courses, such as assessments of fluency and pronunciation, communicative proficiency, and peer evaluation, were examined. Students' and teachers' representations of assessment fell mainly into two categories: inclusion and exclusion. The former linked assessment to inclusive processes such as the acceptance of mistakes. The latter associated assessment with exclusionary processes such as retention and standardized patterns of achievement. Hence, both improvement and student accountability conceptions were evidenced in this study. Cacione and N. A. Souza (2005) reported a case study of 10 students from the 3rd and 4th years of undergraduate courses in music at a public university. The analysis of documents, questionnaires, and semi-structured interviews demonstrated that students associated assessment with traditional instruments, such as examinations. Students' conceptions were deeply linked to the student accountability conception (i.e., assessment for approving or retaining, and for measuring).
In contrast, only a few students reported experiences or conceptions connected with formative assessment (e.g., feedback). Pereira (2006) conducted semi-structured interviews and a survey with 57 students enrolled in an undergraduate course in Education. Students were asked to report: a definition of a "good evaluation"; important things they had learnt about assessment for their career or job; and what teachers do with students' correct or incorrect answers. Pereira found that students expected quite different assessment methods than were being practiced by their teachers. The students wanted more than the traditional test-like assessment practices and showed a critical awareness of both theoretical and practical issues. Students perceived a gap between the theoretical recommendations and the university's actual teaching practices. In this study, it would appear that assessment was conceived of as improvement, but this was overridden by accountability emphases in practice. Mezzaroba (2000), in a case study of one public university, surveyed 26 pharmacy teachers and 26 pharmacy students and conducted semi-structured interviews with 4 teachers. Content analysis showed that students' conceptions of assessment were frequently associated with educational measurement, classification of students, and learning as memorization of content. In addition, students demonstrated negative feelings towards assessment (i.e., fear, anxiety, submission, and passiveness). Thus, a strong student accountability conception with negative affect is shown in this study. Pellisson (2007) used an interpretive research approach to investigate students' and teachers' perceptions about the assessment of English as a foreign language. The sample consisted of 75 students and two teachers from a Portuguese-English teacher training program at a private university. The interviews, class observations, and semi-structured questionnaires demonstrated
convergence between the perceptions of students and teachers. For example, both students and teachers perceived assessment as a continuous and formative process. Nevertheless, examination of the pedagogical practices showed that what really happened during the course was a series of summative evaluations. In fact, students rarely had contact with the theoretical background on assessment during their course. Moreover, students demonstrated negative perceptions, connecting assessment to nervousness, bureaucracy, control, punishment, power, fear, and pressure. Hence, an improvement orientation was seen in the students' perceptions; however, accountability and negative affect were seen in actual practices. Gesser (1996) investigated perceptions of institutional assessment among 63 people (i.e., students, teachers, and employees) at a private university. Gesser also conducted an environmental analysis and interviews with individuals and groups. Results from the content analysis showed that perceptions of institutional assessment were frequently associated with control, decision processes, diagnosis, democratic and innovative processes, and aspects of the institutional environment (e.g., improvement of educational quality). This study seemed to hint at both improvement and institutional accountability conceptions of assessment.

Studies with Multiple Conceptions

Few studies that examined multiple conceptions simultaneously could be found. An open-ended questionnaire was administered in one university in Portugal and two in Brazil (D. Leite, Santiago, Sarrico, C. L. Leite, & Polidori, 2006) to determine students' perceptions of assessments conducted to establish the quality of university education. A random sample of 466 students enrolled in 55 degree programs participated; the number of courses in each institution was used to stratify the sample. Students were asked to respond to two questions: 1) What is your opinion
regarding the evaluation of universities? and 2) Do you think that evaluation of the university produces or will produce improvements in your degree? After rigorous, iterative, qualitative coding, seven main themes were identified: (1) evaluation and teachers; (2) constructive evaluation; (3) universities' control and regulation by the state; (4) evaluation validation and legitimacy; (5) accountability; (6) discrepancy and comparison between universities; and (7) segmentation/fragmentation of assessment. Students perceived assessment as a legitimate exercise, resulting from a political decision and leading to improvement of the quality of universities. Students also recognized that assessment was associated with institutional comparisons and was a mechanism of control, regulation, monitoring, and possible standardization. In addition, students had multiple perceptions about the teaching dimension. They considered that universities' assessment could be a source of feedback to teachers and could improve both teaching methods and the academic success of students. On the other hand, students also perceived that assessment did not produce positive effects on teachers' performance, especially in the absence of a system of sanctions. Thus, this study clearly identifies complex interrelationships between school accountability, improvement, and affect conceptions of assessment. Vieira (2006) used social representations theory to investigate the representations of portfolio assessment held by 178 students from 8 undergraduate courses. A first survey collected demographic data about the students (e.g., gender, age, and socioeconomic status) and responses to 19 multiple-choice questions and one open-ended question (i.e., Write five words that come to your mind when you read the highlighted word: Portfolio). Students also had to choose the most important word and justify their answers. A second survey had nine multiple-choice questions and three open-ended questions: 1) Considering your experience, what is a
portfolio? 2) If a friend of yours from another institution, who does not know about the portfolio, asked you what you thought about this assessment procedure, what would you answer? 3) If you have more comments about the portfolio, write them in the space provided. Vieira reported three dominant representations: (1) approval of evaluation accomplished by portfolio; (2) disapproval of portfolios; and (3) approval with reservations. Most students approved of the portfolio as an evaluation procedure because portfolios were used to minimize the punishment, selection, and exclusion consequences associated with other assessment procedures. In contrast, a minority of students did not believe in the portfolio as an evaluation technique, and even fewer students gave partial support to the portfolio as an evaluation instrument. Vieira also found that educators still had to persuade students about the validity and usefulness of portfolio assessment. These results suggest that the students were aware both of the student accountability conception, which they rejected, and of the irrelevance conception, inasmuch as they needed to be persuaded that the portfolio could be legitimate.

Types of Assessment Activities

There are a number of Brazilian studies into student perspectives on types of assessment activities. Pellisson's (2007) study of 75 students in a Portuguese-English teacher training program at a private university asked two relevant questions: 1) Which kinds of activities (questions) does your teacher use in examinations? and 2) Which types of assessment activities does your teacher use? In examinations, no one question format dominated. Writing a dialogue was used by just over half (51%), filling in the blanks by nearly half (49%), and text comprehension questions by close to half (47%), as were multiple-choice and essay questions (each 45%). These are all formal, paper-and-pencil formats, and so it can only be assumed that examination
responses are dominated by these techniques. Assessment activities other than examinations were few and infrequent: text production (20%), seminars (7%), written summaries (5%), and debates or group discussion in the classroom (3%). These tend to be formal but permit some interactive-informal practices. Pereira's (2006) study also asked 57 students from an undergraduate course in Education to describe the most common procedures by which they were evaluated during the course. Three types of assessment dominated evaluation: "works" (including group work, individual work, papers, research, and written summaries) (93%); examinations (39%); and seminars (30%). In contrast, "self-evaluation" received the lowest percentage (1%). Hence, assessment practices were largely formal, though there was space for considerable informal practices. These two studies from the tertiary level showed a common emphasis on conventional evaluation methods and low use of interactive-informal assessment such as self-evaluation. In contrast, in two primary schools, 14 teachers were asked to respond to the question: Which kinds of assessment activities do you most frequently use with your students? (Machado, 2006). All teachers reported the use of two assessment activities (i.e., group work and observation of students in the classroom) and half reported the use of examinations. However, only the conventional evaluation practices (i.e., examinations) played a role in constructing students' scores or grades, while the more common informal assessment practices played little part in constructing students' scores. Hence, we can conclude that formal evaluation practices dominate classroom and university practices in Brazil and, while informal activities may take place, they carry little weight in assigning grades.

The Present Brazilian Study

While few studies have managed to capture students' multiple and interrelated conceptions of assessment, it should be evident from this brief review that a wide range of conceptions of assessment have been identified in the experiences and thinking of Brazilian university students. These include awareness of student accountability, school accountability, improvement, emotional effect, and irrelevance. The dominant conception appears to be a negative emotional reaction towards student accountability purposes, alongside a minor motif, perhaps a vain hope, of assessment for improvement. There is evidence that traditional forms of assessment practice are most strongly associated with the negative accountability purpose, while alternative, more authentic assessment types may be related to the improvement theme. Further, there is evidence that formal evaluation practices dominate the practice landscape. Next we report the results of a survey study investigating how Brazilian university students define assessment in reference to a range of assessment practices. While this does not directly explore conceptions of assessment, the study is a useful precursor to such an explicit study.

Survey of Brazilian Students' Definitions of Assessment

It seems likely that conceptions of assessment (i.e., purposes or intentions) depend on the type of assessment being considered. If assessment means testing, then a different set of purposes may come to mind than if assessment were to mean interaction with the teacher. One way to control for this potential effect is to establish what the word assessment means to participants before they complete a survey inventory. In order to make this feasible within the context of
survey research, a list of assessment practices can be shown to participants before they respond to survey items about the nature and purpose of assessment. Previous studies have used this approach with teachers (Brown, 2002) and high school students (Brown, Irving, Peterson, & Hirschfeld, 2009). Brown et al. (2009) reported that students identified two major clusters of assessment practices: teacher-controlled, test-like assessments and interactive-informal assessments. The items in the teacher-controlled, test-like assessment category were selected some three times more frequently than those in the interactive-informal category; this clearly indicated that students associated assessment with test-like practices. Interestingly, the same study found that the classroom benefit (affect) and ignoring-assessment conceptions predicted the interactive-informal category, while the personal enjoyment (affect) and teacher-improvement conceptions of assessment predicted the test-like category (negatively and positively, respectively). The authors (Brown et al., 2009, p. 108) concluded that students appeared to think "If assessment is controlled by the teacher, I won't necessarily like it, but it will help them teach me better. The interactive assessments are good for class dynamics but I don't need to pay much attention to them". This result shows that understanding how students define assessment sheds light on their conceptions of assessment. We hypothesized that tertiary Brazilian students would have conceptions in line with Brown et al.'s (2009) two groups of practices, namely formal test-like and interactive-informal practices. A large-scale survey of Brazilian students' responses to the Brazilian adaptation of the Students' Conceptions of Assessment version VI (SCoA-VI) inventory was carried out in 2008. More than 700 Brazilian students (N=702; 206 males and 496 females; age M = 24.39 years; SD
= 5.42) from 15 undergraduate courses (Biological Sciences, Veterinary Medicine, Pharmacy, Occupational Therapy, Education, Psychology, Social Work, Civil Engineering, Physical Sciences, Business Administration, Architecture, Accounting, Economics, Foreign Affairs, and Management) within 2 universities (a public and a private one) participated. Before completing the B-SCoA, students were asked to respond to the question: When you think of the word assessment, which kinds or types of assessment activities come to your mind? The list of activities presented to students was derived from the list reported in Brown et al. (2009). However, five items were modified to better fit the Brazilian context; these are marked with an asterisk in Table 1. Students were allowed to choose up to 12 different types of assessment activities, with each response being scored dichotomously (i.e., 1=selected, 0=not chosen). An exploratory multidimensional scaling (MDS) procedure (Stalans, 1995) was used to determine the underlying dimensions of the students' responses to the 12 types of assessment activities (Figure 1). MDS, by examining all pairs of responses, determines the relative proximity of elements to each other and constructs a spatial map of relative positions for each element. Elements that are close to each other are clearly similar in the minds of respondents. MDS is exploratory since analysts must interpret the spatial map and the organising dimensions underlying the relative positions of elements.

--- Insert Figure 1 about here ---

The number of underlying dimensions determines the geometric shape of the spatial map; one dimension creates a line, two dimensions create a plane, and three dimensions create a cube. While it is possible to have more than four dimensions in an MDS solution, the number of items available for classification is a constraint on the number of dimensions possible. It is
recommended that there be at least 4 objects for each dimension; that is, a two-dimensional solution requires at least eight objects, and with only 12 items one should not expect more than three dimensions. Indicators of good fit are a Kruskal's stress value < .15 and a coefficient of determination (R2), the proportion of variance explained by the dimensions, > .90 (Stalans, 1995). MDS solutions in two, three, and four dimensions were computed using Euclidean distances on the 12 binary measures. Both the three- and four-dimensional solutions were rejected because of the poor ratio of items to dimensions. The two-dimensional solution had a Kruskal stress value of .08 and an R2 value of .97. Dimension 1 (the horizontal axis) was interpreted as the degree of formality of the assessment activities, while Dimension 2 (the vertical axis) was understood to reflect the locus of control for the various assessment activities (i.e., student or teacher). This two-dimensional analysis created a 2 x 2 categorical space: informal/student control; informal/teacher control; formal/student control; and formal/teacher control. Hence, items that fall completely within a quadrant should conform to the interaction of the two dimensions, while items that fall on an axis should be about halfway between the two ends of one of the dimensions. Note that five elements (i.e., A, C, E, I, and J) had Dimension 2 values close to zero (+/-.10), whereas none of the Dimension 1 values were that close to zero. This suggests that participants had little difficulty distinguishing the formality dimension but more difficulty assigning locus of control.

--- Insert Table 1 about here ---
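To illustrate the procedure, the sketch below shows how a two-dimensional MDS solution of this kind could be computed and how Kruskal's stress and R2 could be checked. It is an illustrative reconstruction rather than the analysis actually used for this study: the response matrix is simulated, and the choice of libraries (NumPy, SciPy, scikit-learn) and all variable names are assumptions.

```python
# Illustrative sketch only, not the authors' original script. The 702 x 12
# matrix of dichotomous selections is simulated here; in the study it would
# hold the students' actual 1 = selected / 0 = not chosen responses.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(702, 12))      # placeholder data

# The 12 items are the objects to be scaled, so compute a 12 x 12 matrix of
# Euclidean distances between the item columns.
distances = squareform(pdist(responses.T.astype(float), metric="euclidean"))

# Fit a two-dimensional metric MDS solution on the precomputed distances.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)                # 12 x 2 spatial map

# Kruskal's stress-1 compares fitted inter-point distances with the input
# distances; values below about .15 are usually treated as acceptable fit.
fitted = squareform(pdist(coords))
stress1 = np.sqrt(((distances - fitted) ** 2).sum() / (distances ** 2).sum())

# R^2 here is the squared correlation between input and fitted distances,
# i.e., the proportion of variance in the distances explained by the solution.
upper = np.triu_indices_from(distances, k=1)
r_squared = np.corrcoef(distances[upper], fitted[upper])[0, 1] ** 2

print(f"Kruskal stress-1 = {stress1:.2f}, R^2 = {r_squared:.2f}")
```

Each row of coords could then be inspected so that the corresponding item is placed in one of the four quadrants according to the signs of its two coordinates.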

Formal/Teacher Control

Items A, E, I, J, and K fell into this cluster. These items were selected the most frequently, with Type K (the teacher administers essay examinations) being selected by nearly all participants (87%). The other formal, teacher-controlled assessments were selected by two-thirds to three-quarters of participants. Thus, the students' responses showed an emphasis on conventional evaluation methods: formal methods controlled by the teacher.

Informal/Teacher Control

This quadrant describes informal assessment practices that were controlled by the teacher. Items G (teacher observation), B (self-assessment), and L (teacher-scored conferencing) fell into this cluster and were selected by around one-third of students (30 to 37%). At first glance, item B ought to be a student-controlled activity rather than a teacher-controlled one. There are several possible explanations for the proximity of self-assessment to items L and G. First, it may simply be that students self-evaluate an activity at the direction of the teacher (e.g., a written report or an oral discussion), meaning the teacher is in control of the process. Second, few Brazilian higher education instructors grade, score, or make use of student self-evaluations, suggesting that the real evaluation is still teacher-controlled. Third, if self-assessment aims to approximate the instructor's evaluation (Falchikov & Boud, 1989), then it should not be surprising that self-assessment is still considered teacher-controlled. The location of item C (peer assessment) was problematic since its value on Dimension 2 was just less than zero. It is possible to consider that peer assessment is student-controlled; however, if peer assessment in higher education functions like self-assessment (item B), because it is done at the direction of the instructor, if it is done at all, or attempts to approximate the
instructor's grading (Falchikov & Goldfinch, 2000), then it seems a likely candidate for inclusion in this quadrant. The correlation of peer assessment with the three other members of this quadrant was statistically significant, with an average r = .22 (SD = .04), whereas the correlation with the one element of the informal, student-controlled quadrant was only r = .14. Hence, we concluded that this item is an informal, teacher-controlled practice for most Brazilian university students.
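The comparison of correlations reported above is straightforward with dichotomous data, because the Pearson correlation between two 0/1 items is the phi coefficient. The short sketch below illustrates the check; the simulated response matrix and the column positions assumed for items C, G, B, L, and D are hypothetical.

```python
# Illustrative sketch: comparing item C (peer assessment) with the other
# informal, teacher-controlled items versus the informal, student-controlled
# item. Data and column positions are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(702, 12))       # 702 students x 12 items
cols = {"C": 2, "G": 6, "B": 1, "L": 11, "D": 3}      # hypothetical positions

def phi(a, b):
    # For binary vectors, Pearson's r equals the phi coefficient.
    return np.corrcoef(a, b)[0, 1]

c = responses[:, cols["C"]]
within = [phi(c, responses[:, cols[k]]) for k in ("G", "B", "L")]
print("mean r within quadrant:", round(np.mean(within), 2),
      "SD:", round(np.std(within, ddof=1), 2))
print("r with item D:", round(phi(c, responses[:, cols["D"]]), 2))
```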

Formal/Student Control

This quadrant comprises two items that had no previous parallels in the New Zealand research. Item H (examinations in pairs) and item F (examination with consultation permitted) were selected by nearly half the students and are clearly formal assessments in which students are permitted considerable control over what happens. Students are permitted to work together and consult each other even under examination conditions.

Informal/Student Control

Only one item (i.e., D, teacher questioning in class) was in this quadrant, and it was selected by a quarter of participants. At first glance, this too appears to be an odd location for an assessment that is clearly initiated by the instructor. However, this location is logical if students determine how and to what extent they participate in this activity. If a student does not wish to reply, that choice lies with the student. If a student wants to exhibit a broad or deep grasp of a topic, then such a response is also within the student's control. The ability of a student to avoid participating or to dominate in-class discussion or questioning is not disputed. The informality of such activities is obvious; rarely does a professor grade or score this activity. More
typically, teachers ask questions out loud in class just to increase students’ participation or as a strategy to better explain a specific point. Thus, students have control of this practice.

Discussion

This study found that two dimensions (i.e., formality and locus of control) and their interaction could explain how students defined assessment. Our review of the Brazilian literature led us to expect that assessment would be defined by conventional test-like evaluation methods. Likewise, we expected from Brown et al.'s (2009) work with secondary New Zealand students, who responded to a similar data collection method, that we would find two clusters of assessment practices. These two expectations were supported by the first dimension of formality; there were clear formal test-like and interactive-informal practices. However, an additional dimension (i.e., locus of control) was needed to fully understand Brazilian students' perspectives on assessment. Giving students control over the assessment process changes how assessment is experienced and understood. Hence, this study resulted in four groups of assessment practices: informal/student control; informal/teacher control; formal/student control; and formal/teacher control. This is a further refinement in our understanding of student perspectives of assessment. Nonetheless, in keeping with our hypotheses and the reviewed Brazilian studies, students associated assessment essentially with formal, test-like practices. Nine of the 12 types of assessment activities fell into teacher-controlled quadrants. Even supposedly student-controlled practices (i.e., self- and peer-assessment) were perceived as teacher-controlled practices. This dominance of teacher-controlled assessment may simply reflect students' need or desire for strong guidance from their teachers (Peterson & Irving, 2008). Alternately, it could reflect students' attempts to replicate teacher judgments (Falchikov & Boud, 1989; Falchikov & Goldfinch, 2000), or else it
may simply mean, since teachers make little use of informal assessments in creating students' final grades, that students accurately perceive the irrelevance of such assessment practices. Further analysis of the Brazilian SCoA data (to be completed) may explain why students do not really consider informal and student-controlled practices to be what they think of when they define assessment. From the literature review and from the New Zealand studies, we might expect the test-like practices to be predicted negatively by a personal enjoyment factor, while the interactive, student-controlled activities should be predicted by the classroom social benefit factor and perhaps also by the irrelevance of assessment factor. If the New Zealand studies apply to Brazilian university students, we should expect the improvement conception of assessment to positively predict teacher-controlled and formal assessments. In other words, the current study and literature review lead the authors to predict relations in the Brazilian university population very similar to those found among New Zealand secondary students. Another value of this study is to show that assessment in the developing world, while it may be rather more formal than in other contexts, engenders psychological processes similar to those reported in the developed world. Perhaps students' understandings of assessment are somewhat more general and universal than previously indicated.

Acknowledgement

Financial support from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) is acknowledged. The cooperation of the students and universities is appreciated.

References

Borges, M. N., Gonçalves, M. C. N. S., & Cunha, F. M. (2003, December). Teaching and learning conceptions in Engineering Education: an innovative approach on Mathematics. European Journal of Engineering Education, 28(4), 523. Retrieved March 12, 2009, from Academic Search Premier database.
Brown, G. T. L. (2002). Teachers' conceptions of assessment. Unpublished doctoral dissertation, University of Auckland, Auckland, NZ.
Brown, G. T. L. (2003). Students' conceptions of assessment (SCoA) inventory (Versions 1-6). Unpublished test. Auckland, NZ: University of Auckland.
Brown, G. T. L. (2008). Conceptions of assessment: Understanding what assessment means to teachers and students. New York: Nova Science Publishers.
Brown, G. T. L., & Hirschfeld, G. H. F. (2007). Students' conceptions of assessment and mathematics achievement: Evidence for the power of self-regulation. Australian Journal of Educational and Developmental Psychology, 7, 63-74.
Brown, G. T. L., & Hirschfeld, G. H. F. (2008, March). Students' conceptions of assessment: Links to outcomes. Assessment in Education: Principles, Policy & Practice, 15(1), 3-17. Retrieved March 9, 2009, doi:10.1080/09695940701876003
Brown, G. T. L., Irving, S. E., & Peterson, E. R. (2008, July). Beliefs that make a difference: Students' conceptions of assessment and academic performance. Paper presented at the 6th Biennial Conference of the International Test Commission, Liverpool, UK.
Brown, G. T. L., Irving, S. E., Peterson, E. R., & Hirschfeld, G. H. F. (2009). Use of interactive-informal assessment practices: New Zealand secondary students' conceptions of assessment. Learning & Instruction, 19(2), 97-111.
Cacione, C., & Souza, N. A. (2005). Assessment of learning: revealing conceptions of undergraduate students in Music. Avaliação da aprendizagem: desvelando concepções de licenciandos em música. Proceedings of the Associação Nacional de Pesquisa e Pós-Graduação em Música (ANPPOM), Brasil, 650-658.
Camargo, A. L. C. (1997). The discourse about educational assessment from the student's point of view. O discurso sobre a avaliação escolar do ponto de vista do aluno. Rev. Fac. Educ., 23(1-2).
Dória, C., & Tubino, M. J. G. (2006). Evaluation of the pursuit of citizenship by the Mangueira Olympic Project. Avaliação da busca da cidadania pelo Projeto Olímpico da Mangueira. Ensaio: aval. pol. públ. Educ., 14(50), 77-90.
Entwistle, N. J., & Entwistle, A. (1991). Contrasting forms of understanding for degree examinations: The student experience and its implications. Higher Education, 22, 205-227.
Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: a meta-analysis. Review of Educational Research, 59(4), 395-430.
Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322.
Fraser, B. J. (2002). Learning environments research: yesterday, today and tomorrow. In S. C. Goh & M. S. Khine (Eds.), Studies in educational learning environments: An international perspective (pp. 1-26). River Edge, NJ: World Scientific.
Gesser, V. (1996). Institutional evaluation of the university: what is the meaning for the members of an institution. Avaliação institucional da universidade: qual seu significado para os membros de uma instituição. Unpublished master's thesis, Pontifícia Universidade Católica de São Paulo, São Paulo, Brasil.
Hirschfeld, G. H. F., & Brown, G. T. L. (2009). Students' conceptions of assessment: Factorial and structural invariance of the SCoA across sex, age, and ethnicity. European Journal of Psychological Assessment, 25(1), 30-38.
Hirschfeld, G. H. F., & von Brachel, R. (2008). Students' conceptions of assessment predict learning strategy-use in higher education. Paper presented at the Biennial Conference of the International Test Commission (ITC).
INEP. (2005). National Institute of Educational Research Anísio Teixeira. Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira. Results from Brazil's National Evaluation System of Basic Education (SAEB). Retrieved February 16, 2006, from http://www.inep.gov.br/basica/saeb
Leite, D., Santiago, R., Sarrico, C., Leite, C., & Polidori, M. (2006, December). Students' perceptions on the influence of institutional evaluation on universities. Assessment & Evaluation in Higher Education, 31(6), 625-638. Retrieved March 7, 2009, doi:10.1080/02602930600760264
Machado, S. M. G. (2006). Conceptions and practices - the dilemma of the evaluation of learning: a case study of the evaluation practices of teachers from the state of Maranhão. Concepções e práticas - o dilema da avaliação da aprendizagem: um estudo de caso da prática avaliativa de professores da rede estadual de ensino do Maranhão. Unpublished master's thesis, Universidade Federal do Maranhão, São Luís, Maranhão, Brasil.
Matos, D. A. S. (2006). Students' perceptions of science teachers' communication behavior. A percepção dos alunos do comportamento comunicativo do professor de ciências. Unpublished master's thesis, Universidade Federal de Minas Gerais, Belo Horizonte, Minas Gerais, Brasil.
Matos, D. A. S., Cirino, S. D., & Leite, W. L. (2008). Instruments for the evaluation of classroom learning environment: a literature review. Instrumentos de avaliação do ambiente de aprendizagem da sala de aula: uma revisão da literatura. Ensaio. Pesquisa em Educação em Ciências, 10, 1-18.
Mezzaroba, L. (2000). Concepts of evaluation among Pharmacy and Biochemistry faculty and students at Universidade Estadual de Londrina, Paraná, Brazil. Concepções de Avaliação de Professores e Alunos de Farmácia e Bioquímica da Universidade Estadual de Londrina, Paraná. Revista Brasileira de Educação Médica, 24(3), 53-61.
Neves, M. S. (2002). Discursive process and subjectivity: predominant voices in the evaluation of orality in the learning of English as a Foreign Language in higher education. Processo discursivo e subjetividade: vozes preponderantes na avaliação da oralidade em língua estrangeira no ensino universitário. Unpublished doctoral dissertation, Universidade Estadual de Campinas, São Paulo, Brasil.
Pellisson, J. A. (2007). Perceptions of two teachers of a foreign language (English) and of their students about evaluation: implications for teachers' pre-service education. Percepções de duas professoras de língua estrangeira (inglês) e de seus alunos sobre avaliação: implicações para a formação do professor. Unpublished master's thesis, Universidade Estadual de Campinas, Campinas, São Paulo, Brasil.
Pereira, M. S. F. (2006). Teacher education and evaluation: a study of the perceptions of students in a course in Education. Formação de professores e avaliação: um estudo da percepção dos alunos de um curso de pedagogia. Unpublished master's thesis, Universidade Estadual de Campinas, Campinas, São Paulo, Brasil.
Peterson, E. R., & Irving, S. E. (2008). Secondary school students' conceptions of assessment and feedback. Learning & Instruction, 18(3), 238-250.
She, H. C., & Fisher, D. (2000). The development of a questionnaire to describe science teacher communication behavior in Taiwan and Australia. Science Education, 84, 706-726.
Souza, D. B., & Vasconcelos, M. C. C. (2006). The Municipal Councils of Education in Brazil: a balance of national references (1996-2002). Os Conselhos Municipais de Educação no Brasil: um balanço das referências nacionais (1996-2002). Ensaio: aval. pol. públ. Educ., 14(50), 39-56.
Stalans, L. J. (1995). Multidimensional scaling. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 137-168). Washington, DC: APA.
Struyven, K., Dochy, F., & Janssens, S. (2005). Students' perceptions about evaluation and assessment in higher education: a review. Assessment & Evaluation in Higher Education, 30(4), 325-341.
Vasconcellos, M. M. M., Oliveira, C. C., & Berbel, N. A. N. (2006). The university teacher and good assessment practice in higher education from the students' perspective. O professor e a boa prática avaliativa no ensino superior na perspectiva de estudantes. Interface Comunic, Saúde, Educ, 10(20), 443-456.
Verhine, R. E., Dantas, L. M. V., & Soares, J. F. (2006). From the National Course Exam (Provão) to ENADE: a comparative analysis of national exams used in Brazilian higher education. Do Provão ao ENADE: uma análise comparativa dos exames nacionais utilizados no Ensino Superior Brasileiro. Ensaio: aval. pol. públ. Educ., 14(52), 291-310.
Vieira, V. M. O. (2006). Social representations and educational assessment: what the portfolio reveals. Representações sociais e avaliação educacional: o que revela o portfolio. Unpublished doctoral dissertation, Pontifícia Universidade Católica de São Paulo, São Paulo, Brasil.
Wise, S. L., & Cotten, M. R. (2008). The relationship between students' conceptions of assessment and effort given to low-stakes university assessments. Paper presented at the Biennial Conference of the International Test Commission (ITC).

Fig. 1. Types of assessment activities. Euclidean distance model. Note. Letter E is obscured by J because their values are very similar.

TABLE 1
Types of assessment activities by MDS quadrants

| Types of assessment activities | Frequency | Percent | Dimension 1 | Dimension 2 |
|---|---|---|---|---|
| Formal, Teacher-controlled | | | | |
| *K- The teacher administers essay examinations | 610 | 87 | 1.72 | 0.56 |
| *A- The teacher grades or scores a work in group | 542 | 77 | 1.39 | 0.02 |
| E- The teacher grades or marks or scores the written work I hand in | 533 | 76 | 1.39 | 0.03 |
| *J- The teacher administers multiple-choice examinations | 525 | 75 | 1.44 | -0.04 |
| I- The teacher scores me on an in-class written essay | 448 | 64 | 0.86 | -0.05 |
| Mean | 531.6 | 75.8 | 1.36 | 0.10 |
| Informal, Teacher-controlled | | | | |
| G- The teacher observes me in class and judges my learning | 260 | 37 | -1.25 | 0.77 |
| B- I score or evaluate my own performance | 227 | 32 | -1.34 | 0.69 |
| L- The teacher scores my performance after meeting or conferencing with me about my work | 210 | 30 | -1.39 | 0.47 |
| C- My classmates score or evaluate my performance | 92 | 13 | -1.88 | -0.02 |
| Mean | 197.25 | 28 | -1.47 | 0.48 |
| Formal, Student-controlled | | | | |
| *H- The teacher administers an examination in pairs | 321 | 46 | 0.22 | -0.96 |
| *F- The teacher administers an examination with consultation allowed | 318 | 45 | 0.15 | -0.99 |
| Mean | 319.5 | 45.5 | 0.19 | -0.98 |
| Informal, Student-controlled | | | | |
| D- The teacher asks me questions out loud in class | 187 | 27 | -1.32 | -0.48 |
| Mean | 187 | 27 | -1.32 | -0.48 |

Note. * These items were modified for the Brazilian-SCoA from the SCoA-VI; N=702.