Research Papers in Education 18(4) December 2003, pp. 347–364
Research Papers in Education ISSN 0267-1522 print/ISSN 1470-1146 online © 2003 Taylor & Francis Ltd http://www.tandf.co.uk/journals DOI: 10.1080/0267152032000176855

From evidence-based practice to practice-based evidence: the idea of situated generalisation

Helen Simons, Saville Kushner, Keith Jones and David James1

Helen Simons is a Professor of Education at the School of Education, University of Southampton. Saville Kushner is a Professor of Education at the University of the West of England, Bristol. Keith Jones is a Lecturer in Education at the School of Education, University of Southampton. David James is a Reader in Education at the University of the West of England, Bristol.

ABSTRACT

Governments across the world are seeking improvements in school performance. One avenue to improvement that has been widely promulgated is the reform of teaching through the development of evidence-based practice. This paper reports evaluation data from a national programme in England that sought to put teachers at the heart of the search for evidence on which improvements in practice could be based. The analysis indicates that in each stage of the process of generating and using evidence, practices came to be refined or adopted—whether for the individual teacher, peers or schools—only if they connected closely with the situation in which the evidence for improving practice arose. The paper suggests that the concept 'situated generalisation' contributes to our understanding of how teachers generate, validate and use research knowledge to improve professional practice.

Keywords: Evidence-based practice; situated generalisation; teacher research; professional practice

INTRODUCTION

In a review carried out for the UK Economic and Social Research Council of the capacity for research into teaching and learning, McIntyre and McIntyre (1999) characterised research by teachers and schools as 'the most difficult sub-field [of educational research] to conceptualise and in which to see a clear way forward.' They went on to note that there was little evidence about the possibilities or implications of school-based research. None the less, their report concluded that partnership between schools, LEAs and universities was essential and that resources and infrastructure were needed to support such work.



This paper draws upon the evaluation of a national programme, which—though it pre-dated the review—might have been designed to address these specific concerns and to test out tripartite partnerships of the kind envisaged. The programme, entitled the School-Based Research Consortium Initiative (hereafter 'the Programme'), took place in England from 1998 to 2001. It was sponsored in a public/private partnership between the Teacher Training Agency (TTA), a UK Government agency, and the Centre for British Teachers (CfBT), a private not-for-profit company. The aim of the Programme was to create local infrastructures of support and action for teachers to engage 'in and with' research. Those infrastructures were made up of consortia, consisting in each case of a small number of schools together with a university department of education and at least one local education authority (LEA).

The timing of the Programme was significant. It was conceived in a period of intense interest in the reform of teaching through the use of research evidence to inform and improve teachers' action in classrooms. The TTA itself had sponsored one of the more prominent and influential calls for evidence-based practice, made by David Hargreaves in a well-publicised lecture in 1996. In this lecture, Hargreaves alleged a dysfunctional distance between research and practice which needed to be bridged: 'It is this gap between researchers and practitioners which betrays the fatal flaw in educational research' (Hargreaves, 1996). The period also saw publicity given to reports commissioned by the Office for Standards in Education (OfSTED) (Tooley and Darby, 1998) and the Department for Education and Employment (DfEE) (Hillage et al., 1998). Among other things, these reports were critical of the quality and relevance of educational research in relation to the information needs of teacher-practitioners. Notwithstanding Lagemann's (2000) sobering account of the search for ways of producing high-quality research in education, these calls were distinctive by virtue of their intensity and their volume. They were also characterised by seeming to omit a consideration of some already well-established means for including and involving teachers in educational research, such as the 'teacher-as-researcher' movement and the action-research networks (see, for example, Elliott, 1990, 1991; Altrichter, Posch and Somekh, 1993; Hollingsworth and Sockett, 1994; Somekh, 2000).

The TTA became committed to a view of teaching as a profession that is guided by the systematic use of research evidence—in particular, classroom research (TTA, 1996a). At the time of setting up the Programme, the TTA claimed that 'only a small if significant body of research findings directly focused on classroom practice and enhancing it; more is needed' (TTA, 1996b). In its role as guarantor of the recruitment, retention and improvement of the teaching force, the TTA entered the field as a sponsor of research in various forms, directing funding at teachers.

Though TTA-sponsored teacher research often involved university departments of education, the thrust appeared to many to be a shift in the locus of research from universities to schools, with parallels to an earlier policy thrust concerned with teacher education. One of the first research initiatives the TTA funded directly in schools was a scheme of small-scale grants to teachers for classroom-based research (see TTA, 2000a). In addition, the TTA began to seek ways of influencing the focus of educational research, later claiming that:


The TTA has also encouraged other research funders to give a higher priority to pedagogic research and this has been a key factor in the establishment of a new £12 million Teaching and Learning Research Programme administered by the Economic and Social Research Council. (TTA, 2000b)

This move of the TTA into the sponsorship of research by teachers was controversial. In 1997, Donald McIntyre had used his Presidential address to the British Educational Research Association (BERA) to argue the case for a distinction between 'amateur' and 'professional' researchers—and to suggest that professional research was best located in the university sector. In the early stages of its teacher research initiatives, the TTA came under attack for the quality of the outcomes of the work it had sponsored. One educational researcher reviewed several pieces of research carried out by teachers and concluded that 'research and teaching are significantly different roles which depend on different types of knowledge, skill and disposition' (Foster, 1999). The point has been reiterated in the context of professionalism in research (Hammersley, 1997) and in arguments for 'capacity-building' in educational research (Gorard, 2001).

The Programme was thus born into something of a cauldron. It was, for the TTA, a high-stakes initiative. Perhaps in response to this, the Agency assembled a Steering Group of prominent researchers and practitioners and moved immediately to commission an evaluation. This was followed by a larger commission for a more substantial evaluation of the impact of the Programme, aspects of which inform the present discussion.2

In the next section of this paper, we offer a brief description of the Programme. We then move on to note how both the TTA and the schools and teachers involved in the project conceived evidence-based practice, and some of the adaptations that were made in practice. The bulk of the paper reports on some key examples of how the teachers involved in the Programme developed and came to utilise both their own research and that generated by their colleagues beyond the immediate locale. We suggest that a concept of 'situated generalisation' is helpful for making sense of this process. The paper concludes by re-examining what this experience of teacher-based research can tell us about the extension of research opportunities and skills development to teachers in the context of dominant ideas about evidence-based practice.

THE PROGRAMME IN OUTLINE

In four sites in different parts of England, partnerships were developed between a group of schools, at least one LEA and a university. The aim was to promote teacher research within a local infrastructure of support. One of the subtexts of the Programme was the exploration of these relationships and their capacity to sustain teacher research in schools and support the wider diffusion of research experience across schools and teachers. In total, the Programme involved 29 schools, seven local education authorities (to varying degrees of engagement) and four universities.

The four consortia were funded to support teachers to do and use research in order to improve their teaching and pupil learning—each in their own way, each within their own thematic priorities. The Programme fostered diversity. In one consortium, the focus was on the development of thinking skills. In another, it was school improvement through literacy and numeracy and, later, listening skills. Interests in a third clustered around pupil disaffection, while those in the fourth focused on the teaching of primary mathematics, including conceptual modelling and mental mathematics.


Over the three years that it ran, the Programme spawned a considerable range and volume of research activities, including: peer observation of teaching; peer review of videos of teaching; interview-based study; and surveys measuring such topics as rewards and sanctions in the classroom and parental involvement. In addition to well-developed teacher-university collaborations and some joint work with local education authorities, there were many examples of teacher-teacher collaboration (some of it between different schools), and also times when teachers and pupils worked together to devise, conduct or interpret research activity. In practice, the Programme created an environment in which it was possible to develop new research relationships across a range of partners, rather than merely transfer the locus of research to schools. However, while it is important to mention them, it is not the task of this paper to document the overall outcomes and impact of the Programme. We refer the reader to the Final Report produced by the evaluation (Simons, Kushner, Jones and James, 2002) and the final reports of the Consortia.3

Three aspects of teacher experience in the Programme underpin the present discussion. The first is the overwhelming testimony of teachers that the value of the Programme for them was the rediscovery of their professional confidence in a climate of low-trust accountability, characterised by constant monitoring, target setting and bureaucratic demands. The second is the growth of familiarity with research practices that teachers gained through working collaboratively with their peers, with pupils and with colleagues from the university. The third is how the process of research itself was necessarily situated in teachers' own practices.

PROGRAMME CHANGE IN CONTEXT

The principal published aim of the Programme was 'to explore how research and evidence can contribute to improving teaching and raising standards of achievement'. This was to be achieved through the operational objectives of:

• encouraging teachers to engage with research and evidence about pupils' achievements; for example, to use other people's research to inform their practice and/or to participate actively in classroom research;
• increasing the capacity for high quality, teacher-focused classroom research by supporting teacher involvement in the development of research proposals for external funding;
• developing long term, medium scale data sets, which provide related quantitative data about what teachers and pupils do and how that affects pupil achievements.4

These statements, taken from the specification for tender for the evaluation of the Programme, indicated the main thrust of the TTA's intention at the time: to improve teaching and raise standards of achievement through research. The suggested causal link between the use of evidence, the improvement of teaching and gains in pupil achievement was brought into question by the Programme and by teachers in consortia—it was, in any event, something both the TTA and CfBT drew back from (the CfBT was initially attracted by the focus on raising achievement). Both placed high value on the impact the Programme accomplished in terms of the professional development and renewed enthusiasm of teachers.


At its outset, the Programme sought to reconfigure both the production and the utilisation of research. Teachers using evidence of research elsewhere in a more systematic way, together with the ambition to develop 'long-term medium scale data sets', were suggestive of a new landscape in educational research—given how bleak a picture of educational research prevailed at the time.5 It is important to note, too, that the aim of the Programme and the first operational objective also promoted 'exploration' and the encouragement of active participation in research, suggestive of something more than engaging teachers in action research. At issue in this new landscape was the control of research—much of which was to be passed to teachers and schools.

As the Programme developed in practice, these objectives came to be modified or developed in a number of ways. The encouragement of teachers to engage in their own research was given added emphasis, signalled by an apparent increasing use of the phrase 'engaging in and with research'. The replacement of the term evidence-based practice with evidence-informed practice6 during the life of the Programme reinforced the role of teacher volition in interpretation and judgement in the use of research produced by others. It also facilitated teacher involvement and paralleled the use of the term by the DfEE. Quite different conceptions of research came to be acknowledged, celebrated and legitimated as participants in the Programme undertook a variety of activities. Already substantively diverse, the Programme saw considerable methodological broadening as it progressed.

One prominent adjustment to the objectives was the decision of the TTA to scale back the original plans for collection of long-term medium-scale data sets. The TTA's aim in funding Consortia to experiment with data sets was to start the process of enabling schools to build cumulative data about teachers and teaching to parallel such data about pupils and learning outcomes. It only became clear at a late stage of development quite how difficult this would be. The TTA Manager responsible for the Programme commented on this scaling back thus:

I think it is fair to say, however, that whilst a good deal has been learned about ways of building cumulative teacher and teaching data, this relates as much to what not to do, or about preconditions for building a willingness to work with such data, as about what can be done. (Interview—November 2001)

As even this brief sketch illustrates, the Programme as a whole responded and adapted to its emerging experience. Such an adjustment to overall goals is common to innovatory programmes as they respond to emerging understanding and to what they discover to be the possibilities and constraints in the reality of practice. Over time, much of the character of this Programme emerged in response to the first year, which itself came to be regarded as an early developmental phase.

CHANGE ON THE GROUND

A series of related modifications was apparent for teachers and others at local level. Within a complex structure involving a large number of people across four consortia, different people experienced the Programme in different ways. Teachers designated as 'research co-ordinators' in their schools played a key role, often in conjunction with the Head teacher (depending on the type and size of school and the perspective of the Head), in managing the projects initiated at school and classroom level.


These research co-ordinators grappled with the practicalities of implementing the Programme objectives at the school and classroom level, while, at the same time, sharing experiences at consortium level with the HEI consortium co-ordinator and a 'link officer' working for the TTA. Depending on the consortium and the way it operated, it was Head teachers and research co-ordinators who had most contact with the TTA through management meetings, and they were thereby in the front line of mediating modifications to consortia plans. One consortium co-ordinator has written about how the schools' research co-ordinators were dealing with a number of tensions. These included:

. . . a marked shift in their own roles from teacher to teacher and researcher; the expectations about research held in their schools as communities and their own lack of experience as researchers. In addition, their focus was on the research they were doing, as the object that they hoped would lead to improved pupil learning outcomes. In fact, in year one, few of the research co-ordinators demonstrated a clear link between research activity and improved pupil learning. We would argue that this was because the focus was the research product as research report and not the informed understandings of practice that came from engagement with and in research. (Edwards, 1999)

Early in the Programme, the consortia appeared to go through what one co-ordinator described as a 'confidence-building phase' in which it was important for the teachers and schools involved to pursue their own particular interests. In time, a theme emerged within each consortium, reflecting the discovery of some common purpose within which the teachers involved wanted to make a difference. According to consortium members, the TTA, as funder, had an early influence in encouraging a single theme for the research carried out within each consortium. The TTA, through its 'link officer' in each consortium and through the overall TTA management system, was seen especially to favour consortia themes that focused on researching pedagogy and could demonstrate measurable improvements in pupil achievement. For example, one Consortium Co-ordinator spoke of 'pressure from the TTA', relayed through the first link officer, to 'tighten up the management . . . to exercise greater control over the work of the teachers'. This was a result of the consortium in question initiating 23 school-based projects, some of which, the co-ordinator claimed, the TTA did not think were sufficiently pedagogical in their focus.

In each consortium, the first year was characterised by within-school activities and cross-consortium work, much of which centred on the contributions by HE research mentors/tutors in terms of research training workshops, providing access to research knowledge, theorising about the nature of the partnership and co-ordinating activities through the management groups. Subsequent years saw the emergence of cross-school themes and projects. These were characterised by teacher and school networking, with a shift of responsibility for maintaining cross-consortium work to the school co-ordinators and, in two consortia, to the research co-ordinators in HE.

The first year, too, was an opportunity to better define the relationship between consortia and the TTA, and for each party to revise its aspirations in the light of how that relationship was developing and of the reality of practice. Hence, for example, the early intentions of one consortium to explore the curriculum dimensions of pupil disaffection and to give a high degree of autonomy to teachers in defining their own research questions were adapted to reflect what they experienced as the TTA's concern that the prime focus of the Programme be on pedagogy.


The consortium team saw this as an inevitable consequence of accountability pressures on the TTA and, said one HE co-ordinator, 'we did, in a sense, capitulate in many respects to that anxiety'. According to the TTA Manager, the requirement to focus on pedagogy emerged in part from the fact that 'the TTA had identified a significant under-representation of pedagogic research in the research submitted to the Research Assessment Exercise (RAE) prior to the initiative' (see Cordingley et al., 2002). The TTA also saw pedagogic research as being 'closer to teachers' concerns and needs than, say, policy research and so was keen to help start to close the gap' (Cordingley et al., 2002, p. 10). In addition, the TTA's own remit concerned teachers and teaching and learning rather than the curriculum and, according to the TTA Manager, 'it did not wish or feel it had the power to stray into the responsibilities of other national agencies'. Thus, the annual review of the Programme in 1998 specified the need to 'strengthen the pedagogic focus of the specification for the next stage' (TTA, 2000c, p. 3). Nevertheless, as the TTA Manager put it, in some cases, such as in relation to the curriculum, these parameters sometimes 'made little sense to teachers on the ground'.

The Programme annual review for 1998 also highlighted 'the importance of recognising and tackling the pragmatic obstacles encountered by teachers in creative ways' (TTA, 2000c, p. 3). At this point, the TTA also noted:

the way in which encouraging consortia to explore different approaches to the goals of the Initiative strengthens the capacity to learn from good practice and mistakes but dilutes the capacity to identify generalisable patterns . . .

and how the lack of full understanding of the original consortium proposals about data sets meant that there was a need for 'more guidance' (TTA, 2000c, p. 3).

The early period was also an opportunity to rehearse the broader theoretical framework within which a consortium was located and to adapt the Programme aims to different contexts and audiences. For example, one consortium reported in the first annual Programme review that it was taking a focus on 'matching specific pedagogy for specific purposes in numeracy and literacy' (TTA, 2000c). In the next annual review a second principal theme was noted as 'continuing professional development' and, later, a third theme was added as 'pupil motivation' (TTA, 2000d).

As the Programme progressed, the phrase 'engaging in and with research' came to be used increasingly. According to the TTA Manager, this was not a change of emphasis with regard to the first operational objective; rather it was a shorthand way of expressing what the TTA had intended from the outset, and it became increasingly clearer and more explicit as possibilities for teacher research were explored within each consortium. The two processes were often related. Many teachers did find that engaging in research themselves was a prerequisite for engaging in a meaningful way with research carried out by other people. The starting-point for this could be working in partnership with people from higher education or with colleague teacher researchers in research groups. However, such arrangements did not always lead to teachers engaging in research themselves. The two processes were also related in the opposite direction. As the 1999 review of the Consortia makes clear, across all the schools involved:


Few of these teachers would describe these activities as engaging in research; their perception is more that they are using evidence or research processes to improve their teaching and/or to enhance learning. They do not wish to analyse data on a larger scale or in a larger context or to write up their work but are interested to make a partial contribution to the larger picture. They are also positive about the work of their teacher colleagues who do choose to undertake research and are keen to know about and make use of their 'products'. The initiative has described this process as engaging with rather than in research. It is clear that teachers do not have to do research to engage with it. (TTA, 2000d, p. 19)

The need for consortia to refine further their focus was repeated in the second annual review, which listed 'sharpening the pedagogical focus of the research' as a key issue for further development (TTA, 2000d). Other issues identified at this time included 'generating significant, yet immediately useful, data which has the potential to raise standards and improve teaching' and 'balancing individual consortium priorities with the need to draw meaningful conclusions about consortia arrangements as a whole'. With the review came the requirement for consortia to set targets, agreed with the TTA, to be met in the coming year.

With reference to the second operational objective, the degree of teacher involvement in securing external funding came to be reassessed by both teachers and higher education personnel, in recognition of the complexity of the process, the additional burden placed on teachers and its perceived relevance to them. Early attempts to secure external project funding were unsuccessful and were replaced by an emphasis on in-house research largely resourced by the voluntary time of participants. This shift was accepted as a reality by the TTA, said one consortium member. One of the TTA Managers suggested that it was never the intention that teachers actually write the proposals; if this was assumed, it was a misreading of the Programme's second goal, the implication of which was that the proposal should be discussed with teachers and be seen to reflect their interests.

Similarly, the emphasis on the development of data sets (the third objective) was reduced owing, once more, to the difficulty of translating it into concrete reality. As noted in the previous section, in terms of developing long term, medium scale data sets that provide related quantitative data about what teachers and pupils do and how that affects pupil achievements, the consortia showed that this type of data collection is both complex and experimental. The TTA's second annual review reported that:

There is a long way to go before teachers will have a reputable bank of data available nationally on which they can draw for their professional development in the same way they can draw on pupil data now. One of the issues that the consortia are wrestling with is the need to find ways of collecting such data that are both useful at the point of collection and create a lasting and usable resource. . . . Each consortium has interpreted and carried out the data collection process differently and there is still much work to be done to achieve a consensus around effective types and use of data about teachers and teaching. (TTA, 2000d)

TEACHERS GENERATING AND USING EVIDENCE


As we have seen, the aims and objectives of the Programme, taken in context, could be interpreted as located in the mainstream of evidence-based practice (informed by the 'restorationism' mentioned in note 5). Yet in practice, research activity within the Programme did not lead to the type of generalisation one would expect to flow from this. In practice, teachers developed their own processes for the use of evidence generated by themselves and by their colleagues. We now trace some of the ways in which this occurred.

A number of teachers involved in the Programme expressed a concern that the research they conducted might be judged against conventional criteria in professional research. In particular, they were concerned that the research they produced was not characterised by classical experimental or quasi-experimental design and did not lead to outcomes that could be expressed using the conventions of statistical significance. In this sense, they were accommodating to the dominant model in currency that was being advocated for evidence-based practice. It is also likely that they will have had in mind the kinds of ideas in general circulation about what constitutes good or acceptable social science practice.7

An awareness of this problem was particularly marked amongst (but by no means confined to) teachers with mathematics, natural science and similar backgrounds. For example, in one school, a teacher with a background in psychology expressed concerns that, while she and her colleagues in the school had used a common observation schedule, there was a lack of reliability in the data they had collected in this way, as they had not really agreed on interpretations of observations. In another school, the concerns were more general, with the research co-ordinator saying that, initially, while getting involved in research was appealing, there was trepidation about what constituted 'research': 'research is what is done to people, evidence has to be rigorous—how will it work for us?'. For another teacher there was the feeling that what he was doing (engaging in conventional action research using collegial observation to analyse his pedagogy) was not research—'because there's no statistical evidence'. These and similar concerns were held across the Consortia.
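The inter-rater reliability worry voiced above—colleagues using a common observation schedule without agreed interpretations—can be made concrete. What follows is a minimal illustrative sketch, not part of the Programme's own methods: agreement between two observers coding the same lessons can be quantified with Cohen's kappa. The category labels and ratings below are hypothetical.

```python
# Illustrative only: quantifying inter-rater agreement between two teachers
# who coded the same ten lesson episodes with a shared observation schedule.
# Categories and data are hypothetical, not drawn from the Programme.
from sklearn.metrics import cohen_kappa_score

# Codes assigned independently by each observer ('Q' = teacher question,
# 'E' = pupil extended answer, 'M' = classroom management talk).
observer_a = ['Q', 'Q', 'E', 'M', 'Q', 'E', 'E', 'M', 'Q', 'E']
observer_b = ['Q', 'E', 'E', 'M', 'Q', 'Q', 'E', 'M', 'M', 'E']

kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement; 1.0 = perfect
# A low kappa signals exactly the problem the teacher describes: without
# agreed interpretations of the schedule, the two records are not comparable.
```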

There were also some shared anxieties following apparently hostile receptions to teachers' research reported at educational conferences. One teacher attending an annual conference of the Classroom Action Research Network found his presentation of his consortium's research received critically by professional researchers, and he resolved never to attend a research conference again.

These were examples of a core dilemma for teachers participating in the Programme. Their responses seemed to take two main forms. The first, and most common, was to redefine the research activities they were engaged in as 'professional development', to avoid potential controversy over claims to research validity. For example, here are two teachers talking about a process in which one supports the other in reviewing and analysing their pedagogical practices:

Teacher 1: . . . it's a teacher training tool as much as anything else. It does and has changed how you approach what you're doing. So I think it's as valuable for that and the changes . . . it's made us think about our practice.

Teacher 2: If we were to say what's the most significant thing, the professional development of teachers would have to come very high up. Now, could that have happened without the research . . . and research by what we think is research?

Evaluator: Which is what?

Teacher 2: Well the research was actually looking to see if we could make a change, if we could actually change teachers' questioning techniques, and if we could get students to give longer answers . . . and we tried to keep it quite narrowly focused.


The second form of response, particularly strong in one of the four consortia, is suggestive of a different relationship to the field of educational research, and represents more of an engagement with fault-lines in that field. Here teachers were more consciously avoiding the validity canon: whilst they perceived that there was an expectation upon them to produce a particular kind of evidence, they had a rationale for not doing so. They did not wish to generate what they characterised as 'nomothetic' data (i.e. that which would allow statements that were law-like and meaningful across contexts). Instead, they had recognised that they wished to deal with 'idiographic' data (i.e. that which would remain meaningful within single contexts and might offer some insights for other contexts). This shift allowed a different concept of validity to be adopted, one that was more consonant, in their eyes, with the realities of professional life. These teachers were interested in research as a systematic and transparent process, but also insistent about the situationally-bounded nature of their findings: as one put it, 'what is credible for these pupils in this classroom in this school in this city'. With this concept of validity, teachers (and 'new researchers' from the university with whom they collaborated) felt themselves to be in a safer environment to conduct classroom research.

Focusing on the particular in this way did not prevent theorising and generalisation. The Consortia still worked with the notion of 'evidence' for certain courses of action, but it simply took a different form. What the evaluation observed was essentially a three-stage process of generalising from individual data, each stage testing out relevance (to the self, to colleagues and, eventually, to the consortium as a whole). It may be helpful to refer to these stages as the personal, the collegial and the collective, respectively. There was some evidence that these were stages of engagement, in that shifts between them followed personal experiment and growing confidence.

Stage One: the personal. Here teachers focused on their own understanding. They gathered data (often in relation to a specific issue identified from practice) and analysed it, reflecting on classroom practice. They arrived at generalisations, the ostensible relevance of which was limited to their own practice and their own teaching situation. Several of these points are illustrated in the account below, which describes a teacher's small piece of action research and what he concluded about it.

John was revising with a class by working through an examination question about the application of a certain formula. The class learned the formula but then failed to understand it when they saw it again in the context of a practical example. He reworked the question six weeks later, combining the practical example with the theory, and the success rate on the test rose significantly. John was concerned about interpreting the results of this small experiment, which appeared to show a causal basis of raised achievement—he was well aware that six weeks had intervened and that there may have been many intervening variables: for example, some pupils may have read up in the meantime, or they may have learned something similar in another lesson. Of this experiment John commented: 'There's no validity to the results in research terms. I can't say that presenting the question in this way will improve pupil achievement . . . I can't, because there are too many variables, and none of them were controlled'. However, John is confident about the value of the experience: 'it's the process of thinking about what the conceptual difficulties are for the children which is probably more important than actually what you've done'.
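John's caution can be unpacked with a small worked illustration. The sketch below uses hypothetical pass counts, not John's actual data: even when a before/after comparison passes a conventional significance test, the test only addresses sampling variation. It says nothing about the uncontrolled intervening variables John lists, which is precisely why he declines to make a causal claim.

```python
# Illustrative only: a pre/post comparison like John's, with hypothetical
# pass counts. Statistical significance here does NOT establish causality,
# because nothing between the two attempts was controlled.
from scipy.stats import fisher_exact

# Hypothetical: 8 of 28 pupils answered correctly on the first attempt,
# 20 of 28 on the reworked question six weeks later.
table = [[8, 28 - 8],    # first attempt: [passed, failed]
         [20, 28 - 20]]  # second attempt: [passed, failed]

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}")  # likely 'significant' at the 0.05 convention

# The test rules out chance fluctuation of this size, but cannot rule out
# John's rival explanations (extra reading, learning in other lessons),
# since the design has no control group and no randomisation.
```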


Here, generalisation takes the form of an insight which is significant for the teacher in relation to a fundamental aspect of his work with learners. The insight looks set to resonate with practice from here on.

Stage Two: the collegial. In this stage, research is designed, conducted and analysed in a group setting in which the individual teacher has a degree of professional intimacy in relation to others. Typically, this would be a group of school staff, but sometimes also combined staff and pupils. The earlier stages of the consortia saw much work confined to these semi-private settings within schools. Here, colleagues would share research experiences, sometimes supported by their Head teacher. Here is a Head teacher talking:

People are striving for improvement, but they're doing it in their own isolated bits, and what [the Consortium] did was give you an opportunity, first of all to stand back, reflect, talk about what you're doing, talk to other colleagues about what you're doing, use professional language to do it, and then think 'well, yes, tomorrow I'm going to try changing this bit and this bit and this bit'. . . . We found it very difficult to show improvements as regards to SATs8 but we have definitely showed a big improvement in the tests that we used which obviously weren't standardised . . . There's so much hidden benefit that you don't actually have any evidence for but you just know that it's had some sort of effect.

A teacher in another consortium felt no need to measure the impact of her research, but was confident that a change in the way teachers related to each other would have some impact on children. She and her colleagues had used peer observation, because they had agreed on certain standardised practices:

I do think it is important that we've actually as a team, we've worked as a team and standardized our approach [. . .] This works for us. I mean the children have benefited. Whether you could say that the raising of the standards is as a result of it, I don't think anyone could say that definitely, because was that not going to happen anyway? But you can't say it hasn't.

For her what was important was that 'we've all been given a voice, really, and we listen to each other's point of view, and have time to analyse our teaching methods'. In such cases, there is both individually relevant insight and a group process which is strongly felt to be professionally (and pedagogically) beneficial. Importantly, teachers' activities do produce evidence, but it is evidence that they find impossible to express in a conventional research codification.

Stage Three: the collective. By this stage the group—sometimes the individual teacher—had developed sufficient confidence to work with others across the consortium in conducting research and sharing experience with those in other schools. Now the research assumed more of the character of evidence as commonly recognised, as the collectivity explored its relevance for a wider range of settings. The term 'evidence' here implies information which is indicative of action or which is incorporated into a judgement preceding action. The later stages of Programme activity saw much of this in within-consortium and occasionally across-consortium dissemination and discussion. In one consortium, for example, participating schools packaged their research experiences and each passed the package to another school for 'trialling'.


Here we hear a teacher talking about developing the confidence for such a move, and, indeed, speaking in a language of generalisation in a formal sense of handing on something intact to another group:

I suppose the way I talk about teaching has probably changed—the fact that I've been able to be involved in writing research papers actually makes you think about, well how do I. . . . It's quite easy to talk about it, to show people the things you've done but to pull all that together, to articulate it for an audience who has no concept of what's been going on, that makes you think about it, 'Well, what has been going on? What are the models that work? What doesn't work? What have we learnt from it? How does that fit in with the overall picture of what's going on in education?' But I wouldn't have said that or been in that position a while ago.

A second illustration comes from a conversation in a cross-school/university project team investigating teachers' use of 'rewards' and 'sanctions' in the classroom. Here, too, there is a language of generalisation—but, importantly, generalisation still within the confines of the consortium's activities.

[Ken] designed a classroom observation schedule—and what he was keen to do was to observe closely classroom practice of effective teachers. Now that in itself has led to a great deal of debate in this group [general laughter and assent]. I was very uncomfortable about the way that that description—or—how we came to a judgement about who was an effective teacher, and I know that that's been an issue in . . . Margaret Brown's Numeracy project. And there've been some interesting things there where they thought they'd identified effective teachers and when they actually went in and observed them they weren't effective in the same way. But what Ken has used—and he said he wanted me to emphasise this—he looked at teachers who'd been given Grade 1 OfSTED inspections within his school; he looked at achievements of pupils being taught by those particular teachers; and he was also obviously working from his own evaluation of who he saw in his school as being a highly successful teacher. What Ken has told us here in his message today is that he has done his classroom observation of 6 teachers who he identified—along those criteria—and he hasn't had time to write it up. But what he is saying is that rewards and sanctions do not form part of the overt agenda of any of these six teachers. Okay. So one of the stories that is emerging here is that effective practitioners don't need rewards and sanctions.

The third 'collective' stage is one in which evidence is used to generalise in this broader sense. It achieved its strongest form at the consortium level in the Programme. One of the most distinctive features is that the information that is isolated as evidence does not come as a freestanding piece of information. In fact, as freestanding information it is almost meaningless and the prospects for generalising from it severely limited.9 Our analysis of evaluation data suggests there is a need to appreciate the ways in which the results of research activity were shared-for-use by teachers in the Programme. The process is one that we term 'situated generalisation'.

SITUATED GENERALISATION

In each stage of the process we have sketched above, evidence for change to practice was accepted and/or adopted—whether for the individual teacher, peers or schools—only if there was a visible connection with the situation in which the improved practice arose.


Generalisation takes place (that is to say, there is a process of transforming context-bound data into transferable evidence for other contexts), but only if the relationship to the given situation is sufficiently retained for others to recognise and connect through common problems and issues. However, this seems a necessary but not a sufficient condition for generalisation. Our evidence also suggests that generalisation is made possible by relational and situational factors, such as the confidence and trust that sharing teachers have in each other. In the Consortia there were what can be called 'affirmative environments', in which teachers, teams of teachers, and teachers and university researchers worked together to build and sustain this confidence. The appeal to validity in this setting is not confined to methodological canon, still less to abstract notions of the weight of evidence. It is also grounded in professional agreement as to the usefulness or significance of particular insights, and in the trust and confidence that may be placed in the colleagues offering them. Evidence is still subjected to testing, but testing within these confidence limits, and against criteria agreed among colleagues.

Generalisation lies at the heart of evidence-based practice, in that what turns experience located within a specific context into evidence is a function of both its communicability and its applicability and relevance in other contexts. The Cochrane Collaboration (the contemporary origin of evidence-based practice in medicine in the UK), and its international counterpart, the Campbell Collaboration (named after D. T. Campbell, one-time advocate of the 'experimenting society' who later developed a diametrically opposed view10), hold to a strongly traditional version of generalisation based on data from experimental controlled trials and the demonstration of statistical significance. From research carried out in this model, it is assumed safe to generalise and recommend the adoption of the treatment (in medicine) or 'best practice' (a term frequently used in other contexts) to all similar people, places or situations.11 The Evidence for Policy and Practice Information Co-ordinating Centre (EPPI-Centre) established at the University of London's Institute of Education represents essentially the same approach to evidence and to generalisation, in that it conducts meta-analyses of research using strict protocols. This prevalent notion of systematic review, and the kinds of educational research it can and cannot accommodate, has been the subject of some recent debate (see, for example, Elliott, 2001a; Evans and Benefield, 2001; Hammersley, 2001b; Oakley, 2001). In education, the recent surge of interest in evidence-based practice, sparked off as it was by criticism that educational research had not thus far delivered sound evidence on which to base practice, is driven by the desire to have systematic, cumulative, even incontrovertible evidence to inform practice. For many people, confidence levels can only be safely assumed for such practice if experimental or quasi-experimental studies have been carried out.
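To make concrete what this tradition means by aggregating evidence across contexts, here is a minimal sketch of the fixed-effect, inverse-variance pooling that underlies a conventional meta-analysis of the kind the Cochrane/Campbell model favours. The three study effect sizes and standard errors are hypothetical, and the sketch is offered only to sharpen the contrast with situated generalisation, not as the EPPI-Centre's actual procedure.

```python
# Illustrative only: fixed-effect (inverse-variance) meta-analysis pooling
# three hypothetical intervention studies into one 'generalisable' estimate.
import math

# Hypothetical (effect_size, standard_error) pairs from three trials.
studies = [(0.30, 0.10), (0.15, 0.08), (0.45, 0.20)]

weights = [1 / se**2 for _, se in studies]          # precision weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} ± {1.96 * pooled_se:.3f} (95% CI)")
# The pooled number deliberately strips away context: exactly the move that
# 'situated generalisation' argues loses what makes evidence usable by teachers.
```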
One of the most distinctive features of the example given above, and others like it documented in the evaluation, is that the insight that is isolated as evidence does not come as a freestanding piece of information. In this setting of professional learning, notions of a 'community of practice' (e.g. Lave and Wenger, 1991; Wenger, 1998) are more likely than metaphors of acquisition (or indeed orthodox representations of scientific knowledge dissemination) to help us understand how research-based insight can lead to improvements in teaching and the raising of achievement. Lave and Wenger's (1991, p. 34) claims that 'any "power of abstraction" is thoroughly situated in the lives of persons and in the culture that makes it possible' and that 'the generality of any form of knowledge always lies in the power to negotiate the meaning of the past and future in constructing the meaning of present circumstances' seem especially illuminating.


This suggests that within teaching communities, the limitations of situatedness, especially the failure to transfer what has been learnt in one situation to another, may be transcended. Here the work of Greeno, Smith and Moore (1993) may be useful, in that they have offered a situated account of transfer, attributing it to constraints or affordances that are common across situations. Their account resolves some of the problems traditionally associated with 'transfer' that have beset older, non-situated accounts, in that the constraints and affordances are not characteristics of either the environment or of the person, considered separately, but of the relationship between person and environment. Thus, we are suggesting the notion of 'situated generalisation' as a term that embodies how evidence within the teaching community does not come as freestanding pieces of knowledge but, in order to be viewed as evidence by the community, is bound up in the situation in which it is generated and re-generated.

Since adopting the term 'situated generalisation' we have discovered that Carraher et al. (1995) used the same phrase in the context of children learning mathematics. Their use of the term echoes the notion of 'situated abstraction' proposed by Noss and Hoyles (1996, p. 108). In both these cases, the terms are used in an attempt to describe how individual children can and do, when learning mathematics, abstract beyond the specificities of situations, yet remain to some extent tied to the objects and relationships within the situation, such as, in the case of mathematics, its tools, linguistic conventions and structures. That similar terms have arisen in mathematics education is propitious. In school mathematics, the curriculum has traditionally been designed on the assumption that mathematical knowledge is inherently general because it is abstract. Yet the ineffectiveness of traditional models of mathematics instruction based on the transmission of abstract mathematical knowledge is illustrated by the well-documented failure of many students in mathematics. This failure in mathematics resonates with the 'failure' of educational research to impact on pedagogic practice. In both situations, what is viewed as failure may be an artefact of an inadequate conceptualisation of the nature of the relevant knowledge and how it is generated.

The concept of 'situated generalisation' as developed in this paper also has something in common with that of 'naturalistic generalisation', proposed by Stake (1978) as an alternative to formalistic generalisation in the context of practical inquiry in complex social settings. While formalistic generalisation hands on formal, predictive propositions intact, naturalistic generalisation is 'situated' in that it relies on interpretation and judgement rather than rule or procedure to transfer knowledge from one context to another. It is a process of recognition and adaptation, on the basis of similarities and differences, to one's own context. For example, a teacher might find, in an account of another's classroom, something to which she/he can relate. This certainly happened within the experience of the consortia documented in the evaluation on which this paper draws, where teachers related to and could use evidence provided by colleagues in a way that they often could not relate to research produced by 'experts' outside the consortia.

However, something else was happening in the way teachers adopted research undertaken by colleagues that departed from the idea of naturalistic generalisation. In naturalistic generalisation it is up to the recipients to generalise on the basis of similarities and differences to their own experience; that is, there is an individual 'recipient' judgement. In the context of the consortia, a decision to adopt practices on the basis of evidence produced by teachers, and by teachers and researchers working together, was not simply individual.


It was commonly a process of collective judgement and shared confidence in the findings that had already been demonstrated through the process of collectively researching and analysing the data. In some instances, this was clearly underpinned by confidence in colleagues' research and its relevance. From evidence amassed during this evaluation, these are vital processes in the production of 'situated generalisations' that impact on pedagogic practice.

CONCLUDING COMMENTS

In the context of evidence-based practice, the concept of 'situated generalisation' reminds us of three important factors that need to be borne in mind when exhorting teachers to use 'best practice' from the evidence of others. First, teachers need to interpret and re-interpret what evidence means for them in the precise situation in which they are teaching. Second, the presentation of evidence needs to remain closely connected to the situation in which it arose, not abstracted from it. Third, the collective interpretation and analysis of data by peers seems to act as a validity filter for acceptance in practice.

What was also evident from the experience of the Programme was the preference among teachers for a collective approach to the process of research and to the acquisition and development of research skills. This 'research learning' operated through teams of teachers, groups of teachers and students, and teams of teachers and HE-based researchers. So it was that in the latter stages of the Programme 'new researchers' (including teachers) came to value increasingly complex models of research and more extended forms of inquiry, and, in one consortium, gained a great deal of confidence in conducting nomothetic forms of research—quite the opposite, in a sense, from where they began (Elliott, 2001b). This, and the foregoing discussion, would appear to add to the difficulties for conventional conceptions of evidence-based practice that restrict what is considered as evidence to 'systematic investigation towards increasing the sum of knowledge' (Davies et al., 2000, p. 3) or which propose unproblematic knowledge-transfer across contexts (Davies, 1999).12

This paper offers what we hope are useful pointers for the promotion of teacher research that can lead to improvement in pedagogic practice. As Wiliam (2002) argues, what is seen as the apparent failure of established educational research to impact significantly on pedagogic practice stems from a 'failure to understand the nature of [practitioner] expertise in teaching'. Traditional models of knowledge transfer, suggests Wiliam, can only be effective for those practitioners at a relatively limited level of competence. Instead, the evidence from the evaluation on which this paper is based suggests that teachers can take ownership of the process of cumulative knowledge generation and be free to validate new knowledge according to contextual as well as generic criteria. We characterise this process as involving what we call 'situated generalisation'. Our contention is that this concept contributes to our understanding of how teachers generate, validate and use research knowledge to improve professional practice.

ACKNOWLEDGEMENTS

We wish to thank Wan Ching Yee for her substantial contribution to the evaluation project that underpins this paper, and all the people working in schools, universities, LEAs and the TTA who were generous with their time and attention during the evaluation.


NOTES

1 Authors' names are listed in reverse alphabetical order.
2 The authors of this paper represent the majority of the evaluation team.
3 Contact: Cherry White, Teacher Training Agency, London.
4 Specification for an Evaluation of the four TTA funded School-Based Research Consortia and the Initiative as a whole. TTA, September 1999, p. 1.
5 In this sense the Programme might be said to instance the 'restorationism' that it has been suggested also underpinned the UK National Educational Research Forum's (2000) consultation document Research and Development in Education (see Hamilton, 2002, p. 149). 'Restorationism' refers to the belief '. . . that the deficiencies of science can be countered by more rigorous procedures (technology can be perfectible)' (Hamilton, 2002, p. 148). It also refers to the desire for an evidence-based society, especially through the systematic meta-analysis of previous research evidence. Not all research is eligible in the process. In an ideal-typical version of the approach, what counts as evidence is conceived as a matter of method ('hard' rather than 'soft' is a common characterisation), and matters to do with theory or values are secondary considerations. There is an imperative to identify existing research that can be trusted to be sufficiently robust, and overall, this means the utilisation of randomised controlled trials and conventions of statistical inference. With the right evidence, so it follows, teachers can apply findings directly to the classroom and improvements in teaching and higher standards of achievement will follow.
6 This term became prominent in 1999. It featured strongly in a British Educational Research Association conference paper given by a Senior Education Adviser at the DfEE (Sebba, 1999) and in a key journal article (Hargreaves, 1999).
7 There is an important irony here. Ideas about what constitutes good social science practice are not, for the vast majority of people, based on any evidence. They are rather part of doxa, the everyday cultural assumptions that are so 'obvious' that they 'go without saying' (see, for example, Bourdieu, 1998). Arguably, the 'evidence-based movement' has appealed with great success to common-sense notions of scientific rigour. After all, who would argue against the use of evidence in making any judgement? Who in their right mind would defend a review that was unsystematic? See Hammersley (2001b) for a discussion of this point.
8 Standard Attainment Tasks are National Curriculum Assessments. They occur at the end of each of the first three key stages (ages 7, 11, 14).
9 This point is supported by McNamara (2002), who reports that 'Discussions with teachers about engaging with insights from the wider evidence base in their own classroom practice often revisited the problems of contextualisation' (McNamara, 2002, p. 25).
10 See Hamilton (2002, p. 151) for a brief summary of Campbell's 'classic recanting' put alongside other examples.
11 It must be noted in passing that the move to evidence-based practice implies for many the adoption of a 'medical model' of research and meta-analysis that appears to minimise risk and to provide the basis for safe and maximally effective treatment. This adoption is taken to be self-evidently a good thing. Yet for all its many benefits, Western medicine would also appear beset by problems that are often to do with the adequacy, communication or knowledge of evidence. The real problem is that reality is 'messier' than the model supposes. For example, Wood et al. (1998) showed that health professionals do not simply apply scientific research but collaborate in discussions and engage in work practices which actively interpret local validity and value. There are 'situated knowledges', or shared local ways of thinking and acting that frame the meaning of evidence.
12 Hammersley (2001a, p. 1), a critic of teacher research, describes how such systematic investigation is positioned. It is 'held to contrast with evidence from professional experience, which is portrayed as unsystematic—reflecting the particular cases with which a practitioner has happened to come into contact—and as lacking in rigour—in that it is not built up in an explicit, methodical way but rather through an at least partially unreflective process of sedimentation'.


REFERENCES

ALTRICHTER, H., POSCH, P. and SOMEKH, B. (1993). Teachers Investigating Their Work: an Introduction to the Methods of Action Research. London: Routledge.
BOURDIEU, P. (1998). Practical Reason: on the theory of action. Cambridge: Polity Press.
CARRAHER, D., NEMIROVSKY, R. and SCHLIEMANN, A. (1995). 'Situated Generalization'. In: L. MEIRA and D. CARRAHER (Eds) Proceedings of the 19th Conference of the International Group for the Psychology of Mathematics Education, Vol. 1, p. 265. Recife, Brazil.
CORDINGLEY, P., BAUMFIELD, V., BUTTERWORTH, M., MCNAMARA, O. and ELKINS, T. (2002). 'Lessons from the School-Based Research Consortia'. Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, September 2002.
DAVIES, H. T. O., NUTLEY, S. M. and SMITH, P. C. (Eds) (2000). What Works? Evidence-based policy and practice in the public services. Bristol: Policy Press.
DAVIES, P. (1999). 'What is evidence-based education?', British Journal of Educational Studies, 47, 108–121.
EDWARDS, A. (1999). 'Shifting Relationships with Research in a School-University Research Partnership: a socio-cultural analysis'. Paper presented at the Annual Conference of the British Educational Research Association, University of Sussex, September 1999.
ELLIOTT, J. (1990). 'Teachers as Researchers: Implications for Supervision and for Teacher Education', Teaching and Teacher Education, 6, 1–26.
ELLIOTT, J. (1991). Action Research for Educational Change. Milton Keynes: Open University Press.
ELLIOTT, J. (2001a). 'Making Evidence-based Practice Educational', British Educational Research Journal, 27, 555–574.
ELLIOTT, J. (2001b). 'Key transitional moments in the evolution of a collaborative research'. Paper presented at the Annual Conference of the British Educational Research Association, University of Leeds, England, September 2001.
EVANS, J. and BENEFIELD, P. (2001). 'Systematic Reviews of Educational Research: does the medical model fit?', British Educational Research Journal, 27, 527–541.
FOSTER, P. (1999). 'Never mind the quality, feel the impact: a methodological assessment of teacher research sponsored by the TTA', British Journal of Educational Studies, 47, 380–398.
GORARD, S. (2001). 'A changing climate for educational research? The role of research capacity building'. Paper presented at the Annual Conference of the British Educational Research Association, University of Leeds, England, September 2001.
GREENO, J. G., SMITH, D. R. and MOORE, J. L. (1993). 'Transfer of Situated Learning'. In: D. K. DETTERMAN and R. J. STERNBERG (Eds) Transfer on Trial: Intelligence, cognition, and instruction. Norwood, NJ: Ablex, pp. 99–167.
HAMILTON, D. (2002). 'Noisy, Fallible and Biased Though it Be (On the Vagaries of Educational Research)', British Journal of Educational Studies, 50, 144–164.
HAMMERSLEY, M. (1997). 'Educational Research and the teacher: a response to David Hargreaves' TTA lecture', British Educational Research Journal, 23, 141–161.
HAMMERSLEY, M. (2001a). 'Some Questions about Evidence-based Practice in Education'. Paper presented at the Annual Conference of the British Educational Research Association, University of Leeds, England, September 2001.
HAMMERSLEY, M. (2001b). 'On "Systematic" Reviews of Research Literatures: a "narrative" response to Evans and Benefield', British Educational Research Journal, 27, 543–554.
HARGREAVES, D. H. (1996). Teaching as a research-based profession: possibilities and prospects (Teacher Training Agency Lecture 1996). London: TTA.
HARGREAVES, D. H. (1999). 'Revitalizing educational research: lessons from the past and proposals for the future', Cambridge Journal of Education, 29, 239–249.
HILLAGE, J., PEARSON, R., ANDERSON, A. and TAMKIN, P. (1998). Excellence in Research on Schools. London: Department for Education and Employment.
HOLLINGSWORTH, S. and SOCKETT, H. (Eds) (1994). Teacher Research and Educational Reform: The Ninety-Third Yearbook of the National Society for the Study of Education (NSSE). Chicago, IL: University of Chicago Press.
LAGEMANN, E. C. (2000). An Elusive Science: The Troubling History of Education Research. Chicago, IL: University of Chicago Press.
LAVE, J. and WENGER, E. (1991). Situated Learning: legitimate peripheral participation. Cambridge: Cambridge University Press.
MCINTYRE, D. and MCINTYRE, A. (1999). Capacity for Research into Teaching and Learning: Report to the Programme. Swindon: ESRC Teaching and Learning Research Programme.
MCNAMARA, O. (2002). 'Evidence-based practice through practice-based evidence'. In: O. MCNAMARA (Ed.) Becoming an Evidence-based Practitioner: a framework for teacher-researchers. London: RoutledgeFalmer.
NATIONAL EDUCATIONAL RESEARCH FORUM (2000). Research and Development for Education: a national strategy consultation paper. Nottingham: NERF Publications.
NOSS, R. and HOYLES, C. (1996). Windows on Mathematical Meanings: Learning Cultures and Computers. London: Kluwer Academic Publishers.
OAKLEY, A. (2001). 'Making Evidence-based Practice Educational: a rejoinder to John Elliott', British Educational Research Journal, 27, 575–576.
SEBBA, J. (1999). 'Progress and issues in developing evidence-informed policy and practice'. Paper presented at the Annual Conference of the British Educational Research Association, University of Sussex, September 1999.
SIMONS, H., KUSHNER, S., JONES, K. and JAMES, D. (2002). Final Report of the Independent Evaluation of the TTA/CfBT School-Based Research Consortium Initiative. University of the West of England, Bristol and University of Southampton.
SOMEKH, B. (2000). 'Changing Conceptions of Action Research'. In: H. ALTRICHTER and J. ELLIOTT (Eds) Images of Educational Change. Buckingham: Open University Press.
STAKE, R. (1978). 'The Case Study Method in Social Inquiry', Educational Researcher, 7, 5–8.
TOOLEY, J. and DARBY, D. (1998). Educational Research: a critique. A survey of Educational Research. London: Office for Standards in Education.
TTA (1996a). Teacher Training Agency Corporate Plan 1996/7. London: Teacher Training Agency.
TTA (1996b). Teaching as a Research-Based Profession: promoting excellence in teaching. London: Teacher Training Agency.
TTA (2000a). Teacher Research Grant Scheme: background and goals 1996–1997. London: Teacher Training Agency.
TTA (2000b). Improving Standards: research and evidence-based practice. London: Teacher Training Agency.
TTA (2000c). TTA/CfBT-funded school-based research consortia, Annual Review 1998. London: Teacher Training Agency (publication number 92/2-00).
TTA (2000d). TTA/CfBT-funded school-based research consortia, Annual Review 1999. London: Teacher Training Agency (publication number 182/12-00).
WENGER, E. (1998). Communities of Practice. Cambridge: Cambridge University Press.
WILIAM, D. (2002). 'Linking Research and Practice: knowledge transfer or knowledge creation?' Plenary presentation at the Annual Conference of the North American Chapter of the International Group for the Psychology of Mathematics Education, Athens, Georgia, USA, October 2002.
WOOD, M., FERLIE, E. and FITZGERALD, L. (1998). 'Achieving clinical behaviour change: a case of becoming indeterminate', Social Science and Medicine, 47, 1729–1738.

CORRESPONDENCE

Professor Helen Simons, School of Education, University of Southampton, Highfield, Southampton SO17 1BJ, UK. E-mail: [email protected]
