Cambridge Journal of Education
ISSN: 0305-764X (Print) 1469-3577 (Online) Journal homepage: http://www.tandfonline.com/loi/ccje20
Datafying the teaching ‘profession’: remaking the professional teacher in the image of data
Steven Lewis & Jessica Holloway
To cite this article: Steven Lewis & Jessica Holloway (2018): Datafying the teaching ‘profession’: remaking the professional teacher in the image of data, Cambridge Journal of Education, DOI: 10.1080/0305764X.2018.1441373
https://doi.org/10.1080/0305764X.2018.1441373
Published online: 16 Mar 2018.
Datafying the teaching ‘profession’: remaking the professional teacher in the image of data Steven Lewis
and Jessica Holloway
Research for Educational Impact (REDI), Deakin University, Burwood, Australia
ABSTRACT
This paper explores how data-driven practices and logics have come to reshape the possibilities by which the teaching profession, and teaching professionals, can be known and valued. Informed by the literature and theorising around educational performativity, the constitutive power of numbers, and affective responses to data, it shows how different US educators experienced, and came to embody, new forms of numbers-based accountability. Drawing on interviews with teachers, and school- and district-level leaders, as well as relevant school-based documents, it is argued that such data are now both effective (i.e. they change ‘what counts’ within the profession) and affective (i.e. they produce new expectations for teachers to profess data-responsive dispositions over actual educative practices). This prevalence and use of data have combined not only to change teaching into a ‘data profession’, but also to change teachers into ‘professors’ of data.
ARTICLE HISTORY
Received 14 September 2017; Accepted 1 February 2018

KEYWORDS
Data; subjectivity; affect; teacher disposition; standardised assessment
Introduction

This paper explores how the teaching profession, as well as the teaching professional, has come to be redefined by, as and through processes of ‘datafication’ (Lycett, 2013; Mayer-Schönberger & Cukier, 2013), especially via the numerical data produced by standardised modes of teacher evaluation and student assessment. Our focus emerged from two simple and yet surprisingly enigmatic questions; namely, in this contemporary moment of datafication, (1) What is the teaching profession, and (2) Who is the teaching professional? By these provocations, we seek not to provide precise renderings of what these terms should be, but rather to destabilise how data-driven practices and logics have come to reshape the possibilities for what the teaching profession and professional are and can be. These terms, profession and professional, are used in educational discourses and research with both familiarity and frequency. However, and like other similarly nebulous concepts within the field of educational sociology – for example, neoliberalism, accountability or standards – their meanings are far too often ill-defined or presumed (see, for example, Gerrard, 2015; Rowlands & Rawolle, 2013). In a similar vein, Ball notes how the terms ‘policy
research’ and ‘policy sociology’ are frequently undertaken with a presumptive, unspoken definition of policy, if it is defined at all:

[M]ore often than not, analysts fail to define conceptually what they mean by policy. The meaning of policy is taken for granted, and theoretical and epistemological dry rot is built into the analytical structures they build…. For me, much rests on the meaning or possible meanings that we give to policy; it affects ‘how’ we research and how we interpret what we find. (1994, p. 15)
But the implications of pinning down what is meant by profession and professional – and, for that matter, policy – are substantive and extend far beyond matters of mere semantics. The emancipatory rationale undergirding policy sociology, in which taken-for-granted relations and processes of power are not merely described but also contested (Ozga, 1987), reflects a need for research to undertake a critical advocacy, and to challenge oppressive structures and practices; that is, to destabilise ‘that-which-is by making it appear as something that might not be’ (Foucault, 1988, p. 36). We thus see taken-for-granted, datafied understandings of ‘profession’ and ‘professional’ as limiting our ability to problematise these terms, and for how they might otherwise be reinscribed to achieve more socially just ends, both individually (i.e. professionals) and collectively (i.e. the profession). To this end, we examine here how enumerations of teaching function to discursively constitute the teacher as a performative subject, one whose sense of self is informed largely by what (s)he thinks data reveal about her/him. Drawing suggestively across literature and theorising around educational performativity (Ball, 2003), the constitutive power of numbers to ‘make up’ people (Desrosières, 1998; Hacking, 1986), and the affective influence that data exert on teaching subjects and education policy (Sellar, 2015; Webb & Gulson, 2012), we show how different US schools and educators experienced, and responded to, new modes of numbers-based accountability. Our analyses are informed by interviews with teachers and school leaders, as well as relevant school-level policy materials, which were collected separately across two different US-based studies into new modes of educational governance (each conducted between 2013 and 2014). 
One study focused specifically on modes of testing that measured school-level performance, via the new PISA-based Test for Schools (hereafter ‘PISA for Schools’) instrument of the Organisation for Economic Cooperation and Development (OECD); the other focused on techniques that measured teacher-level performance, via value-added measures (VAMs) and teacher observation rubrics. Even though the two studies were not originally conceived with such a collaboration in mind, bringing our respective datasets together – and then re-analysing them using a new theoretical lens – allows us to move beyond the apparent differences between different modes of teacher accountabilities (e.g. high versus low stakes; mandatory use versus voluntary use) in various school-specific contexts, and instead draw attention to the broader effects and conditions of possibility that can arise from their use. This enables us to speak to the pervasive logics that underpin these particular examples of enumerative accountability. While we readily acknowledge the significance of the specific contexts (i.e. the people, places, systems and norms) in which such accountabilities are deployed, we use this paper to show how seemingly different modes of accountability and data can elicit similar school and educator responses in otherwise distinct contexts. As such, we avoid making generalisable, sweeping statements about teachers and school leaders and their responses to accountability, while illustrating that a discourse of datafication is pervasive nonetheless. What we argue is that data are both effective – i.e. they effect tangible changes to what counts within the teaching profession – and affective, in so far as they produce new
expectations amongst teachers to openly profess data-responsive attitudes and dispositions, and to embody these data-informed renderings of self. Our analyses reveal that teachers at our schools were most valued for demonstrating a disposition favourable to data, were amenable to being represented by data and, ultimately, sought to improve data over other educative practices (e.g. pedagogy). By doing so, we show how the prevalence and use of data have not just changed teaching into a ‘data profession’, but also changed teachers into ‘professors’ of data.
Theoretical resources: governing schooling by numbers

In spite of their contemporary nature, a focus on quantified understandings of teaching and teachers – epitomising what Porter (1995) describes more generally as a ‘trust in numbers’ – is the latest manifestation of processes that have long been in train. Indeed, we can see this in the very origin of statistics as ‘state numbers’ (Rose, 1999), whereby statistics and society are mutually known and brought into existence through ‘the very act of counting’ (Sætnan, Lomell, & Hammer, 2011, p. 1). This constitutive process also reflects that such numbers, far from providing an objective measure of some empirical reality, are deeply implicated in constructing the very phenomena they seek to measure (Desrosières, 1998). Such logics are also evident in processes of evidence-informed policy-making (Head, 2008), where numbers are used to provide the most putatively objective evidence, or ‘facts’ (Desrosières, 1998), to legitimise policy decisions, even when the overall accuracy of numerical data – their ‘reality’ – is often less important than their precision and comparability. This means that ‘a plausible measure backed by sufficient institutional support can nevertheless become real’ (Porter, 1995, p. 44; emphasis added), irrespective of the fact that these data might (at best) be described as ‘funny numbers’ (Porter, 2012). We see these developments paralleling the shift from earlier, more professional modes of accountability that operated during the post-war bureaucratic welfare state towards more performative modes of accountability, the latter of which emerged in concert with the ‘audit society’ and associated neoliberal forms of governance (Lingard, Sellar, & Lewis, 2017).
This includes a significant increase in data collection, analysis and use in education globally, with numbers relied upon to define, measure, compare and govern the performance of students, teachers, schools and schooling systems alike (Ozga, 2016). We can consequently see a growing sense of what has been described as policy and governance by numbers (Grek, 2009; Lingard, 2011; Rose, 1999), which enable understandings of teacher performance – and especially the ‘good school’ and ‘good teacher’ (Thompson & Cook, 2014) – to be steered in certain data-centric ways. Indeed, this fetishisation of testing, numbers and data in education has expanded to such an extent as to be described by Lingard and colleagues (2013) as an overarching ‘meta-policy’, across a variety of local, national and global schooling spaces. And even while putatively ‘objective’ numerical measurements can be used to legitimise school or system-level policy decisions, they can also influence how individual teachers come to enact and value certain pedagogical practices, logics and dispositions over others (Hardy, 2015; Holloway & Brass, 2018). Despite the pervasion of numerical data, we do not suggest that the governance of teaching and teachers is determined solely by numbers. One need only consider more recent theorisations of educational governance – such as governing by examples (Simons, 2015), ‘best practices’ (Auld & Morris, 2016; Lewis, 2017), socialisation (Grek, 2017) and
expertise (Grek, 2013; Lingard, 2016) – to see how these shifts in governance are, in fact, far from ‘exclusively based and dependent upon the cold rationality of numbers’ (Grek, 2017, p. 295). Rather than seeing these developments as paradigmatic shifts away from numbers, we emphasise instead how these new(er), more qualitative configurations of evidence often reference, and even complement, existing numerical data; for instance, how qualitative examples of ‘what works’ are valued for their ability to ‘improve’ numerical forms of performance data (see, for instance, Lewis, 2017; Lewis & Hogan, 2016). Such developments are central then, arguably, to the transformation of the teaching profession, in which ‘re-presenting’ (Miller & Rose, 2008) oneself as an effective teacher through data has become of central importance, both for system-level accountabilities and for knowing and expressing one’s own worth as a teacher. As Miller and Rose note, any mode of governing:

… depends on a particular mode of ‘representation’: the elaboration of a language for depicting the domain in question that claims both to grasp the nature of that reality represented, and literally to represent it in a form amenable to political deliberation. (2008, p. 31)
The objective quality afforded to numerical data thus bestows an objective ‘truth’ to whatever aspect of teaching or teachers happens to be under interrogation. Within such performative regimes, teachers are then impelled to actively re-present both their practice and their being through data, in order to produce ‘fabrications’ that accord with ‘particular registers of meaning … in which only certain possibilities of being have value’ (Ball, 2003, p. 225). This need to perform performance, to literally fabricate oneself via data, means that actual pedagogy and student-centred practices – what Power (1994) would describe as ‘first-order activities’ – can be subsumed by ‘second-order activities’, or the giving of auditable and measurable accounts of one’s self and practice. We also position this study and theorising in contrast to the contentious and growing body of literature concerning education accountability that often focuses on the technical aspects of such policies and practices, such as the statistical validity and/or reliability of accountability measures, the extent to which policies and practices actually do improve teacher and student performance, and so on (see, for instance, Amrein-Beardsley, 2014; Amrein-Beardsley & Holloway, 2017). These studies, within what might broadly be defined as the field of policy evaluation, often critique matters related to accountability output use (e.g. consequences and stakes attached to such data outcomes), but without also interrogating the underlying logics that enable such tests and testing practices to exist in the first place, or the discursive effects that such tests exert on teacher subjects and the teaching profession. While such debates are fruitful, we see the ‘datafying’ of teaching through such accountabilities as worthy of critique in its own right, especially in terms of how these different tests influence the way(s) in which the teaching profession is reconstituted through data. 
Indeed, all such accountabilities render teacher ‘quality’ as something visible, measurable and comparable, thereby remaking the ‘teacher’ into a calculable object of knowledge and limiting the conditions of possibility (Foucault, 1994) by which teaching ‘excellence’ might otherwise be conceived.
Context

The United States has relied on numerical data to measure and compare school performance for several decades, but recent federal incentive programmes have placed new pressures on
schools to implement accountability systems that rely heavily on student test scores to evaluate teacher and school performance. Race to the Top (RttT), Teacher Incentive Fund (TIF) grants, and No Child Left Behind (NCLB) waivers, for example, have incentivised states and school districts to adopt high-stakes teacher evaluation systems that incorporate ‘multiple measures’ of teacher performance, including some measure of teacher influence on students’ annual learning growth. A majority of US states have since adopted and implemented some form of value-added model (VAM)¹, a statistical tool designed to link individual teachers to their students’ growth on annual standardised achievement tests. While some scholars – primarily coming from economics – have promoted the use of VAMs, most educational researchers have urged caution due to the instruments having significant issues related to validity, reliability, bias and fairness (for a full review, see Amrein-Beardsley & Holloway, 2017). In addition to VAMs, ‘multiple measures’ also commonly include teacher observation rubrics that are used to assess teacher performance during a combination of announced and unannounced classroom observations. These rubrics assess various indicators of ‘quality’ that are used to numerically rate a teacher’s level of proficiency, and a teacher’s final performance evaluation is typically derived from a combination of her/his rubric and VAM scores. While all US public schools are mandated to participate in some form of annual standardised achievement tests (as per NCLB legislation or the more recent Every Student Succeeds Act), some schools in the United States also choose to participate in other, more voluntary assessments. One of these emergent accountability tools is the Organisation for Economic Cooperation and Development’s (OECD) PISA for Schools, a voluntary school-based version of the more renowned Programme for International Student Assessment (PISA).
PISA for Schools measures the performance of 15-year-old students in reading, mathematics and science, and compares these school-level data against the performance of schooling systems as determined by ‘main PISA’. In this way, schools can be known, compared against and even notionally learn from ‘high-performing’ schooling systems (e.g. Shanghai, China; Finland), despite the problems arguably inherent in this process (see Lewis, 2017). Even though the sample-based nature of PISA for Schools does not directly link individual teacher practice with student outcomes, these school-level measurements of student performance still become, in effect, proxy markers for teacher effectiveness. Importantly, this new ability to compare across local, subnational and national schooling spaces strengthens how PISA-based metrics and data are fast becoming the undisputed lingua franca when reporting teaching, and thus teacher, effectiveness.
Methods

This paper involves two distinct, though interrelated, parts. First, it brings together a group of accountability tools used in the United States to assess teacher and school performance (VAMs, teacher observation rubrics and PISA for Schools), and it includes interviews with a variety of school and district-level actors across four US states who engaged with these instruments in various capacities (school teachers and principals; district assistant superintendents and superintendents). Perhaps what makes this research most distinctive is its integration of two separate studies that were conceived and executed at different times and in different locations, albeit both focusing on US schools and districts. This collaboration emerged when we noticed, despite the different contexts and accountability tools present in the separate studies, strikingly similar dispositive responses to
data from the respective school and district-level participants. Regardless of the specific testing instruments and/or accountability contexts that were the foci of the original studies, we use this paper to demonstrate the ways in which a ‘data-driven’ discourse is helping to constitute a new teaching profession and professional. To best describe our methodological approach, it might be helpful to first describe what it is not. For one, it is not a comparative study in the traditional sense, in so far as it does not attempt to draw comparisons or contrasts between two studies that were admittedly conceived at different times, under different circumstances and with different intentions. Rather, after ongoing discussions about the two studies, we came to see the potential value in re-focusing our attention away from that which made the studies different (e.g. geographic context, participant profile, etc.) and towards that which made them similar – that is, the logics, technologies and discourses that provided the conditions for the datafication of teaching and the teaching profession. With this in mind, we brought our datasets together and integrated the interview responses into a single corpus, thereby enabling us to see across individual contexts. Specifically, we shifted the contextual landscape from solely focusing on ‘traditional’ comparative factors, such as geographic location, state-based legislative frameworks or local socio-economic contexts, to attend to more discursive contextual factors, such as the logics, rationalities and technologies that have come to define teaching and teacher effectiveness via numbers and data. While we acknowledge the potential dangers associated with trying to make sense of policy and its effects free of context, we want to stress that this analysis is about re-contextualising these data to position them within a larger context, one that is shaped by far-reaching and circulating discourses related to testing and data. 
This enabled us to focus less on the participants’ local reactions to specific tests and/or testing mechanisms, and to focus more on the ways in which testing logics and a testing culture can produce new educator subjectivities that change what it means to be part of a profession or what it means to be a professional. That being said, we do not suggest these findings should be taken to provide generalisable statements about all teachers or school leaders, as we do not view any knowledge as neutral or apolitical (Foucault, 1980). Rather, we use this analysis – guided by our theoretical and epistemological assumptions and interpretations – as illustrative evidence of a shift in teacher subjectivity relative to a testing culture that has proliferated globally (Holloway, Sørensen, & Verger, 2017; Smith, 2016). Each of our data sets consisted of two main sources: semi-structured interviews with teachers, teacher leaders/evaluators, school principals and school district-level leaders (assistant superintendents and superintendents); and document analysis. In total, 26 participant interviews were included in the corpus, including interviews with nine teachers, 11 school principals, and six district-level leaders. Each of these interviews was approximately one hour in duration and all were transcribed remotely after the interview had been conducted. Supplementing these interview data, we also analysed documents relating to the respective accountability tools (VAMs, rubrics and PISA for Schools), including the instruments themselves, supporting technical and administrative documentation, and school and district-level reports. Given these aggregated data, our analysis consists of two parts, one related to the effects of data and one related to the affects of data. For the effects, we were primarily concerned with the ways in which numerical data were produced and used to measure teacher performance via different accountability tools. 
To this end, we listed the types of data produced
by VAMs, rubrics and PISA for Schools, while also tracking the ways these data were used to enable the calculation of teacher and school performance. This enabled us to think about the specific effects that were made possible by these tools and the data associated with the quantification of teacher and school performance. For the affects of data, we primarily sought to understand the ways in which these data and data practices have been taken up by teachers, as well as by school and district leaders, via semi-structured interviews. Once these interview data were compiled into a single document, we conducted multiple read-throughs of the transcripts, collecting analytic memos (Saldaña, 2013) regarding instances where data (or references to data) were used to (1) construct knowledge about teachers or teaching, or (2) to constitute a teacher’s value, expected disposition or attitude. After comparing memos, we extracted these segments and conducted subsequent rounds of analysis, using our theoretical framework to analytically track the ways that data shaped the participants’ affective responses and subjectivities (see Sellar, 2015; Webb & Gulson, 2012).
A tale of two (teaching) professions: the effects and affects of data

Being effect-ive: a data profession

As noted earlier, a central feature of many contemporary schooling systems is the prevalence of numerical data, and their overwhelming use in determining student, teacher, school and system effectiveness. Although many systems globally have since adopted an ‘Anglo-American’ top-down, test-based approach to educational accountability and governance (see Lingard & Lewis, 2016), here we examine several accountability tools that have been deployed within the United States; namely (a) VAMs, (b) observation rubrics and (c) PISA for Schools. Specifically, we are interested in the types of data that these tools produce and the governance practices that these data enable for the teaching profession. Or, in other words, we seek to discern the tangible effects of how these tools and data change the possibilities for what we think, and even can think, about the teaching profession. Importantly, the effects that we are interested in are not limited to any particular schooling site or context, and hence the data we draw upon here are the accountability tools themselves, rather than the individual or collective responses of educators and schooling systems. Our focus therefore is what these enumerative instruments actually do to, and make possible for, the teaching profession in a broad sense, before next attending to the subjective and affective responses of individual teachers and school leaders. What is most apparent upon examining these three accountability tools (see Table 1) is that, irrespective of the precise nature of the instrument or the data they produce, they each exhibit similar internal logics and enable remarkably common effects. For instance, all three generate solely numerical data to assign putative levels of ‘effectiveness’ value to performance. This can be the explicit measuring of the ‘value’ added by teachers to student performance via VAMs (i.e.
how much has an individual teacher contributed to the growth of student performance in their class(es)?); the quantification of certain teacher classroom practices via observation rubrics (i.e. to what degree does a teacher demonstrate ‘effective’ classroom practices?); or the comparison of school-level performance on PISA for Schools against that of ‘high-performing’ international schooling systems (i.e. are we better than/ worse than Finland or Shanghai, China?). Over and above the similar numerical nature of
Table 1. Different schooling accountability tools, their data and potential effects.

Tool: VAM
Numerical data produced:
- Teacher-level measure of ‘value added’ to student test scores in reading and mathematics
- School-level measure of ‘value added’ to student test scores in reading and mathematics
What these data enable:
- Teacher-level comparisons
- School-level comparisons
- Teacher evaluation calculation
- Teacher performance-based pay calculations
- Teacher retention, termination and tenure decisions
- ‘High stakes’ for teachers

Tool: Teacher observation rubrics
Numerical data produced:
- Teacher-level measure of classroom performance associated with a variety of instructional indicators (e.g. alignment of standards and objectives with instruction; lesson structure and pacing)
- Teacher-level measure of professional performance associated with professional responsibilities indicators (e.g. participation in professional development; contribution to overall school improvement)
- Teacher-level measure of the teacher’s ability to self-assess (e.g. the teacher regularly self-reflects; the teacher can effectively assess her/his lesson effectiveness)
- Teacher proficiency level (levels 1–5; unsatisfactory to exemplary)
What these data enable:
- Teacher-level comparisons
- School-level comparisons
- Teacher evaluation calculation
- Teacher performance-based pay calculations
- Teacher retention, termination and tenure decisions
- Teacher professional development goals and objectives
- ‘High stakes’ for teachers

Tool: PISA for Schools
Numerical data produced:
- School-level measure of student performance in reading, maths and science
- School-level measure of school and student contextual data (e.g. student attitudes to reading; disciplinary climate in classrooms; student–teacher relationships)
What these data enable:
- School-level comparisons with national and international schooling systems
- Teacher professional development goals and objectives
- School reform goals and objectives
- ‘Low stakes’ for teachers
outputs, we would argue that such data enable a whole slew of practices that reinforce how teacher effectiveness should be demonstrated, first and foremost, by improving data – i.e. the second-order activities (Power, 1994) – rather than by focusing on the practices, pedagogies and policies (i.e. the first-order activities) that most influence these data. Within such tools and logics, effectiveness can only be known, measured and compared by numerical data, or ‘the data that counts’ (Hardy, 2013), and thus the teaching profession as a whole is compelled to produce data that can best demonstrate this effectiveness. However, the very nature of these accountability tools means that only certain kinds of data can be produced, with the data deemed to ‘count most’ being determined well in advance of the tools themselves even being deployed. For instance, Table 1 shows that these include such presaged signifiers of teacher quality as the frequency of self-reflection (observation rubrics) or the amount of student interest in reading (PISA for Schools), with the teaching profession having little to no influence over which of these metrics should be included or valued. We should emphasise, however, that our purpose here is not to problematise the specific metrics of teachers or teaching that have been chosen to represent effectiveness in any of the accountability tools examined, or to suggest that different teacher/teaching metrics would somehow alter how the profession can be known and governed through
data. Rather, we draw attention to the fact that any such tool of quantification, and thus any such set of quantities, necessarily simplifies a complex (and contingent) social process for purposes of commensuration, comparison and, ultimately, evaluation. In this we can see how standardised forms of schooling accountability help to convert otherwise ‘complex social processes and events into simple figures or categories of judgement’ (Ball, 2003, p. 217), meaning that only certain forms of effectiveness ‘evidence’ are valued while, at the same time, others are marginalised or excluded. Regardless of which of the above accountability tools and metrics are employed, all three exemplify how datafication processes ‘unavoidably omit many features of the world … [and] channel users towards some kinds of inferences and/or actions more readily than others’ (Lycett, 2013, p. 384). Any apparent differences between the accountability tools (e.g. teacher-level measures versus school-level measures; high-stakes versus low-stakes; voluntary versus mandated) are elided by what the data make possible for how schooling can be understood and practised. In short, the implications for the teaching profession are the increased use of and reliance upon data to know itself and for it to be known by others. Data then become the central focus, rather than whether such data produce a positive effect on student learning. We would also draw attention to these judgements regarding the effectiveness of the teaching profession being determined, somewhat ironically, outside of the teaching profession, be it by commercial analytics companies (e.g. SAS Institute Inc.) and test providers (e.g. Pearson), or intergovernmental organisations (e.g. the OECD). 
This arguably changes the role of the traditional teaching profession and its members, because these developments also move knowledge, expertise and authority around ‘what counts’ outside of the profession, to third-party service providers and analysers of data. In this we can see the overwhelming focus on data: on the production of data, the analysis of data, on data-driven policy-making and reform, and (perhaps most importantly) on the teaching profession being known and steered by data. Presumably, any difference between the opinions of the teaching profession and the data will be resolved in favour of the data, especially when such data possess the objective impartiality and certainty of numbers.

Being affect-ive: a profession of data

Given the prevalence and importance of accountability tools to measure and know the teaching profession more broadly, it is perhaps unsurprising that such performative pressures can also exert a considerable influence on individual teaching professionals themselves. This is not only in terms of data steering their practice but also their portrayal of themselves as professionals, thereby changing not only what teachers do but also, ultimately, who teachers are (see Ball, 2003). Our purpose here is to explore the affective effects of these datafication techniques on how teachers understand their ‘effectiveness’, and how these produce new expectations for teachers to openly profess data-responsive attitudes and dispositions, or what Sellar (2015) describes as ‘a feel for numbers’. Drawing across the various teacher responses to VAMs, observation rubrics and PISA for Schools from our collective data-set, we focus here on three key themes that emerged from our analyses, in the context of theorising around broader processes of governing by numbers and the quantification of schooling.
(i) Data are necessary to validate teaching

The participants frequently challenged the possibility of ‘knowing’ a teacher or school’s degree of effectiveness without numerical data, which positioned the accountability tools as
necessary to make value judgements. These data-responsive logics were present irrespective of whether the resulting data were to be used to inform high(er)-stakes practices (e.g. performance-based pay, retention/termination), or low(er)-stakes governance practices (e.g. professional learning, school-based reform). Indeed, there was a sense amongst the teachers and school leaders that quantifying performance was not only desirable but necessary to discover the ‘truth’ about themselves and their schools:

I think you have to quantify the evaluation process somehow. And, like, I’m a big fan…. I think that assigning it [teaching practice] numbers is the best way that we have to do things right now. (Teacher, Arizona; emphasis added)

One of the things I really liked about the OECD Test [for Schools] is that it was giving us an opportunity to benchmark our school with national schools, as well as global schools…. [W]e’re a world-class education system, we’re providing a world-class education, and I want[ed] to know if that was true. (Assistant Superintendent, Virginia; emphasis added)
Here we can see the constitutive power of numbers, be they high- or low-stakes, in which ‘the very act of counting’ (Sætnan et al., 2011) brings quantitative confirmation to otherwise indiscernible, subjective qualities, be they ‘teaching practice’ or the ‘world-class’ status of a school district. Importantly, this clarity is not only about verifying the pre-data claims of a school and its educators (‘I want[ed] to know if that was true’). It also prioritises a ‘trust in numbers’ (Porter, 1995), whereby teachers express their faith in the power of numerical data to fully capture and represent their performance (‘you have to quantify’; ‘the best way’). Despite their apparent faith in data for their ability to validate performance, the ever-present need for teachers to deliver ‘productive’ data meant that such accountability tools were far from wholly benign instruments of evaluation. Reflecting Porter’s (2012) notion of ‘funny numbers’, there was the sense that numerical VAM data – that is, the measure of a teacher’s effects on standardised student achievement scores – were institutionalised as the ultimate way to give validation to teacher quality, even when there was considerable reason to question the validity of the VAM data themselves. In fact, many teachers acknowledged how the underlying need to quantify performance could induce an over-reliance upon numbers that were often ‘unfair’ and ‘inaccurate’, but that this data focus was still inescapable (‘there’s no other way’):

You have to measure performance somehow, and that’s a standardised test, and teachers’ job performance has to somehow be tied to that…. Whether it’s fair? Or totally accurate or perfect? I would say maybe not, but there’s no other [way]…. It may be unfair, it may be inaccurate at times, but all you have is what you produce, and you have to produce the best product that you can. (Teacher, Arizona)
This suggests what we see as a tension, or ‘doublethink’ (Hardy & Lewis, 2017), amongst educators towards data, in which data are simultaneously valued and critiqued, problematised and accepted. However, and even while these data, and the ensuing focus on improving data, were called into question by some teachers, such contestations frequently paled in comparison with the way that numbers could ‘speak’ through them. Despite the questionable fairness or accuracy of numbers, this need for teachers to perform and ‘produce the best product [i.e. data] that you can’ extended far beyond simply producing favourable re-presentations (Miller & Rose, 2008) of their teaching practice. Even while teachers challenged the limitations of data, there was evidence of their being (re)constituted by data, in which VAM scores not only represented their ability to teach but also, significantly, their worth and value – their very being – as a teacher:
I think the value-added [scores] gives you validation. If you’re somebody who has shined in [your observation] evaluation scores, but kids aren’t learning in your classroom, it’s pretty obvious you’re a phony. (Teacher, Arizona; emphasis added)
In these comments, the discursive power of numbers has taken precedence over other possible renderings of professional performance or professional judgement, especially in terms of constructing that which they seek to confirm (‘gives you validation’) and reifying the importance ascribed to data-informed conceptions of teacher subjectivity (‘you’re a phony’). In this we can see how educators’ knowledge of teacher effectiveness is removed from their personal expertise and authority, meaning that one can only know the extent to which a teacher is effective by looking at the numerical data. We also draw attention to the contestation noted between different sets of numerical data: here, VAM scores and observation rubric scores. While this reflects Ball’s (2003) terrors of performativity and the need for teachers to produce positive representations of their performance within specified metrics, the lack of alignment between two numerical measures of teacher effectiveness also calls into question the supposed objectivity of numbers, despite both sets of data being numerical and thus ‘true’. Even so, such contestation between data did little to undermine the perceived value of numbers to give validation to teacher practice, or for their ability to speak the truth about the worthiness of individual teacher subjects.

(ii) A data-responsive disposition is necessary for teachers to be seen as effective

Given that numerical data are required to validate teacher value and the associated professional judgements, ‘truthful’ accounts of performative worth were present alongside teachers’ portrayals of data-driven dispositions, marked by an openness to profess themselves as, through and by data. Many teachers framed data-responsiveness as a key indicator of a ‘good teacher’, while rarely mentioning more traditional indicators, such as effective pedagogical or instructional practices.
Perhaps most tellingly, the ‘good teacher’ was understood as someone who professes a ‘give-a-shit attitude’ (Evaluator, Arizona); that is, one who embraces a data-responsive disposition, who ‘wasn’t scared’ to be judged through data, and who is willing to constantly work on themselves as informed by that data. This focus on presenting oneself as data-driven was evident as teachers acknowledged the value given to disposition over practice, whereby teacher effectiveness was considered solely through the lens of data; in turn, improvements to performance were only valued inasmuch as they could help make tangible improvements to these data. As one teacher noted, the foremost concern in terms of demonstrating effectiveness was being able to ‘shift the numbers’, even as this caused anxiety around whether such shifts were likely, or even possible:

It’s about what I can grow in as a teacher, [and] how I can be more effective. Because, I mean, if you can’t shift the numbers, which history has shown us that you can’t, then what do I do then? (Teacher, Arizona)
Such responses framed teachers as avid producers and consumers of data, with personal and professional growth inescapably linked to improvements in the data generated by various accountability tools, meaning that a data-responsive disposition was necessary to both know oneself through numerical data and to see improving those data as a central motivation. Moreover, this trust in numbers could arguably only exist in a context where numerical data are valued above all other possible ways of knowing teachers and teaching, even though
(as noted previously) these same teachers could question such data for their lack of educative benefits and the stress that was often induced. This partiality was arguably reflected in teachers not only responding to data but also to the instruments that generated these data. In other words, observation rubrics, for instance, became the consummate authority on teaching, which had the effect of marginalising the professional judgement of teachers themselves:

What I’ve learned is, I need to look at the rubric. Is that making [me] a more effective teacher? ... I think it does. I think it’s from a place, like a data-centred thing, and like research-based stuff. And if you do check those things off [the rubric], I think that you do get better. (Teacher, Arizona; emphasis added)

I like that [the rubric is] all spelled out, like, here’s what you need to do to get this number, here’s what you need to do to get this number. So, I like that ability to go back to look at it and find out where I could go. (Teacher, Arizona; emphasis added)
Given the importance of projecting an image of success, and the conflation of data outcomes with achievement, it is perhaps unsurprising that teachers sought the expertise contained within the very instruments of their judgement. Again, this underscores the apparent need to use observation rubrics, and similar forms of evidence, to inform their practice (‘I need to look at the rubric’), and that this willingness to engage with the instrument and data in itself made one a more effective teacher (‘you do get better’; ‘get this number’; ‘I think it does’). In this we can see how teachers have embodied a data discourse that assumes numbers produce the only knowledge ‘that counts’ (Hardy, 2013), and that being favourably disposed towards using and believing in these numbers is how one demonstrates quality. This is not to say that teachers should be discouraged from critically reflecting on their practice or even, for that matter, from using data to help inform that reflection. However, we would contend that focusing only on certain kinds of data, which measure only narrowly defined parameters of teachers and teaching, is problematic, especially when these data come to supersede the professional training, knowledge and judgement of teachers (or other professional educators, such as teacher evaluators) themselves. It should also be noted that this desire for teachers to align their effectiveness and sense of self with performance data was consistent with the expectations of the school and district-level leadership teams as well. As one district administrator noted, being ‘data-driven’ was the central rationale of their entire organisation:

We have a very data-driven superintendent. Consequently, a very data-driven senior administration, and we have a very data-driven school board. We have a nine-member board that is elected that oversees the administration of our district; they are very data-driven. So, everybody is wanting to know, ‘How are our kids performing? What are we doing? How are we performing in comparison to other districts in the state, in the country, internationally?’ So yes, we are an extremely data-driven organisation. (Assistant Superintendent, Texas)
Such comments emphasise how data have become indispensable to the governance of schooling via the construction of data-responsive teachers and school leaders. Not only do data provide the means to objectively ‘know’ teacher and school-level performance, but they also serve to distinguish schools and districts (and educators) that are willing to ‘speak the truth’ (through data) from those that are not. This has the effect of repositioning data, and a data-responsive disposition, as central to how teaching professionals and their practice are understood and known.
(iii) Teachers are capable of improvement but never perfection

At the same time as data were deemed necessary to validate teaching, and a data-driven disposition was seen as the most valued attribute of teaching professionals, these logics were accompanied by a seemingly paradoxical concern; namely, that teachers are capable of improving themselves and their practices through data, but this improvement will never lead to perfection. This Sisyphean state of perpetual imperfection reflected the sense that teachers and administrators must constantly demonstrate their acceptance that there is always growth to be made, in so far as there is always a competitor (teacher, school, system) who might perform better. Hence, teachers must remain competitive and constantly strive to achieve better versions of themselves. Indeed, the relative nature of these data, in which present performance can be compared against past performance, and the performance of one teacher or school can be compared against another, facilitates the belief that there is always room to improve, provided that such improvement can be captured by data. Coupled with this data-driven disposition, teachers and administrators openly professed a willingness to work on themselves to improve their performance, accepting the necessary presupposition that there was always some deficiency to address:

In the United States, it’s certainly very much a political issue, and whether schools are good or bad has political weight. And so instead of, ‘What can we do better’, there’s an awful lot of time spent on, ‘Are we really bad or are we really good?’. And in a sense, I don’t really care – we’re not as good as we could be. (Superintendent, New York; emphasis added)
This sense that there is always room for improvement, irrespective of how well one performed (‘we’re not as good as we could be’), was a common refrain amongst teachers and leaders, especially since this improvement could always be readily quantified and compared against past performance via the accountability tools employed. Here we can see how the relative, rather than absolute, nature of the performance data evoked a sense of ‘governance by comparison’ (Nóvoa & Yariv-Mashal, 2003), whereby individual teachers sought ways to constantly improve their own data, even if these improvements could only be benchmarked against their own past performance:

You have a bunch of people who have high expectations for themselves and they’re perfectionists…. They could get all 5s [‘excellent’ on a scale of 1 to 5] and one 3 [‘average’] and be devastated…. And even if you gave somebody all 5s – and we have some teachers that are close to that. They’re phenomenal, they’re just that good. But even if you gave them all 5s, that would not be good enough because they still, in their hearts, know of something that they could have done better. But that’s the type of people that you want. (Teacher, Arizona)
What is perhaps most significant from the comments above is that teachers perceived themselves as flawed and capable of improvement, but this striving to improve was accepted as a never-ending proposition. Even teachers deemed to be high performing by the parameters of whatever accountability tools were at hand (‘get all 5s’; ‘they’re phenomenal’) were seemingly dissatisfied. This was not so much on account of criticism from the school leaders, but instead on account of individual teachers who were, in effect, their own harshest critics (‘they [knew they] could have done better’). However, this inclination towards constant self-critique and self-improvement was seen as a positive attribute by school leaders (‘that’s the type of people that you want’), meaning that the impetus to constantly improve data was a central consideration for teachers striving to be seen as ‘effective’. In this way, teaching professionals were constituted as needing to profess themselves as, by and through data. Even while data and data-responsiveness were positioned as central
to understanding teacher effectiveness, improving these data (and thus oneself) was seen as an inexorable concern, which required teachers to accept that they would never be perfect, no matter how much they improved. There is also a clear recursive relationship here, with the need to constantly improve performance data driving an ever-more intense engagement with, and response to, enumerations of teachers and teaching, ad infinitum. Datafication has thus created a situation where teachers can only know themselves and their practice as data, and these data will, in turn, tell them what and how they need to improve – in short, where data, and an inclination to use data, provide teaching professionals with their diagnosis and prescription.
Conclusion

Our research demonstrates how data-driven practices and logics have come to reshape the possibilities by which the teaching profession, and teaching professionals, can be known and valued, and the ways that teachers can ultimately be and associate themselves in relation to their work. Drawing on the experiences of US teachers, and school and district leaders, we have shown how the underlying logics, rationalities and technologies of enumerative accountability tools help to reconstitute the teaching profession by redefining what counts and how it is counted. This in turn reshapes the teaching professional, in the sense that teachers are valued most for openly professing a data-responsive disposition and for their ability to embody these data-informed renderings of self, over and above other more educative and pedagogical practices. Even though these two studies were initially conducted independently, and thus necessarily reflect different local conditions and accountability regimes, drawing our research together and re-analysing it through a new theoretical lens reveals the frequent dispositive similarities amongst teaching professionals when they are compelled to re-present their effectiveness, and themselves, as, by and through data. This double articulation, with data both effective and affective, reflects how teaching has become, in the United States specifically but also more broadly, a data profession – that is, a profession in which data are central and where teachers are required to constantly profess themselves as data.
Here we can see how the collection, analysis and comparison of performance data is the central consideration, regardless of whether such metrics are legislatively mandated (such as VAMs or teacher observation rubrics) or voluntarily implemented (PISA for Schools), or whether they are to inform high-stakes decisions around teacher tenure and performance pay (VAMs, rubrics) or low-stakes decisions around local professional development (PISA for Schools). Irrespective of the accountability tool in question, our research highlights the common types of data that are produced through enumerative datafication techniques (i.e. numbers, school/teacher-level measurements), as well as the similar types of decisions and practices that these data enable (i.e. comparisons between teachers/schools; improvement of data as the overriding consideration). Moreover, these processes and logics – which we describe as the effects of data – exert considerable influence on how individual teacher subjectivity is itself constituted, and how a data-driven teacher disposition is construed as being of most value. These dispositive responses – what we term the affects of data – embody how teaching professionals are compelled to see data as the only way to validate their teaching, and thus it is necessary for such professionals to be fully responsive to data in order to know and profess their worth
as a teacher, transforming teaching professionals into professors of data. However, even as improving oneself through improving one’s data becomes a central motivation, these same teachers were acutely aware that while improvement was possible (and even obligatory), perfection was always out of reach. Rather than diminishing their pursuit of perfection, this had the somewhat ironic effect of redoubling their focus on data and improving their data, manifesting in an ongoing personal struggle to constantly seek perfection – that is, ‘to get all 5s’ – and yet forever be unsatisfied. While we eschew any notion of seeking to generalise across all contexts and cases, it is significant to note that these dispositions were evident even in the presence of diverse accountability tools and regimes, and even across a variety of school- and teacher-level contexts. We thus see such affectations as animating the logics of datafication (i.e. validation through numbers, constant measurement, the impetus to improve data), rather than the particular instrument of accountability being employed. In terms of teacher subjectivity, it thus matters less how these numerical data are produced, and more that they are produced and valued, institutionally and individually. To return to the provocations that began our argument, we see defining the teaching profession and the teaching professional as a thoroughly philosophical and social endeavour, a part of broader answers to complex questions around the very purposes of education and schooling – that is, for whom (e.g. the individual, the employer, the nation-state) and to what end (job-readiness, self-capitalisation, social cohesion, sustainability) are students being educated?
While we in no way sought to answer such questions within the confines of this paper, we instead emphasise that even contemplating these more philosophical responses is increasingly difficult within a paradigm that defines teaching effectiveness solely through the capacity to improve student performance, and which obliges teachers to be beholden to data – that is, to be wholly representable as, and fully responsive to, data. These various processes of datafication have produced a teaching profession(al) thoroughly cast in the image of data, in which ‘objective’ renderings of effectiveness are constantly sought and internalised before subsequently being professed as ‘truth’, even while these datafied enumerations are necessarily imperfect. Perhaps what is most fascinating is that regardless of how complete (or otherwise) a picture such data may provide of teacher effectiveness, and irrespective of the lens of measurement being used, the datafied image of teaching and teachers, seen by others and themselves, will always be imperfect when seen through a glass, darkly.
Note

1. It should be noted that we are using the term ‘value-added model’ (VAM) broadly to include all such statistical tools that are used to measure teacher effects on student test scores. While other models, such as student growth measure models, are technically different in terms of their statistical procedures and properties, they operate similarly in their purpose of quantifying teachers’ influence upon student test score growth. For the purposes of this paper, we are concerned with the latter and thus will be using ‘VAM’ to capture all such models.
Disclosure statement

No potential conflict of interest was reported by the authors.
ORCID

Steven Lewis http://orcid.org/0000-0002-8796-3939
Jessica Holloway http://orcid.org/0000-0001-9267-3197
References

Amrein-Beardsley, A. (2014). Rethinking value-added models in education: Critical perspectives on test and assessment-based accountability. New York, NY: Routledge.
Amrein-Beardsley, A., & Holloway, J. (2017). Value-added models for teacher evaluation and accountability: Commonsense assumptions. Educational Policy. doi:10.1177/0895904817719519.
Auld, E., & Morris, P. (2016). PISA, policy and persuasion: Translating complex conditions into education ‘best practice’. Comparative Education, 52(2), 202–229.
Ball, S. (1994). Education reform: A critical and post-structural approach. Buckingham: Open University Press.
Ball, S. (2003). The teacher’s soul and the terrors of performativity. Journal of Education Policy, 18(2), 215–228.
Desrosières, A. (1998). The politics of large numbers. (C. Nash, Trans.). Cambridge, MA: Harvard University Press.
Foucault, M. (1980). Power/knowledge: Selected interviews and other writings 1972–1977. New York, NY: Pantheon Books.
Foucault, M. (1988). Critical theory/intellectual history. (A. Sheridan, Trans.). In L. D. Kritzman (Ed.), Politics, philosophy, culture. Interviews and other writings of Michel Foucault, 1977–1984 (pp. 17–46). New York, NY: Routledge.
Foucault, M. (1994). On the archaeology of the sciences: Response to the Epistemology Circle. In J. D. Faubion (Ed.), Aesthetics, method and epistemology: Essential works of Foucault, 1954–1984 (pp. 297–333). New York, NY: The New Press.
Gerrard, J. (2015). Public education in neoliberal times: Memory and desire. Journal of Education Policy, 30(6), 855–868.
Grek, S. (2009). Governing by numbers: The PISA ‘effect’ in Europe. Journal of Education Policy, 24(1), 23–37.
Grek, S. (2013). Expert moves: International comparative testing and the rise of expertocracy. Journal of Education Policy, 28(5), 695–709.
Grek, S. (2017). Socialisation, learning and the OECD’s reviews of national policies for education: The case of Sweden. Critical Studies in Education, 58(3), 295–310.
Hacking, I. (1986). Making up people. In T. C. Heller, M. Sosna, & D. E. Wellbery (Eds.), Reconstructing individualism: Autonomy, individuality and the self in Western thought (pp. 222–236). Stanford, CA: Stanford University Press.
Hardy, I. (2013). Testing that counts: Contesting national literacy assessment policy in complex schooling settings. Australian Journal of Language and Literacy, 36(2), 67–77.
Hardy, I. (2015). A logic of enumeration: The nature and effects of national literacy and numeracy testing in Australia. Journal of Education Policy, 30(3), 335–362.
Hardy, I., & Lewis, S. (2017). The ‘doublethink’ of data: Educational performativity and the field of schooling practices. British Journal of Sociology of Education, 38(5), 671–685.
Head, B. W. (2008). Three lenses of evidence-based policy. Australian Journal of Public Administration, 67(1), 1–11.
Holloway, J., & Brass, J. (2018). Making accountable teachers: The terrors and pleasures of performativity. Journal of Education Policy, 33(3), 361–382.
Holloway, J., Sørensen, T. B., & Verger, A. (2017). Global perspectives on high-stakes teacher accountability policies: An introduction. Education Policy Analysis Archives, 25(92), 1–18.
Lewis, S. (2017). Governing schooling through ‘what works’: The OECD’s PISA for Schools. Journal of Education Policy, 32(3), 281–302.
Lewis, S., & Hogan, A. (2016). Reform first and ask questions later? The implications of (fast) schooling policy and ‘silver bullet’ solutions. Critical Studies in Education. doi:10.1080/17508487.2016.1219961.
Lingard, B. (2011). Policy as numbers: Ac/counting for educational research. The Australian Educational Researcher, 38(4), 355–382.
Lingard, B. (2016). Think tanks, ‘policy experts’ and ‘ideas for’ education policy making in Australia. The Australian Educational Researcher, 43(1), 15–33.
Lingard, B., & Lewis, S. (2016). Globalisation of the Anglo-American approach to top-down, test-based educational accountability. In G. T. L. Brown & L. R. Harris (Eds.), Handbook of human and social conditions in assessment (pp. 387–403). New York, NY: Routledge.
Lingard, B., Martino, W., & Rezai-Rashti, G. (2013). Testing regimes, accountabilities and education policy: Commensurate global and national developments. Journal of Education Policy, 28(5), 539–556.
Lingard, B., Sellar, S., & Lewis, S. (2017). Accountabilities in schools and school systems. In G. Noblit (Ed.), Oxford research encyclopedia of education (pp. 1–28). New York, NY: Oxford University Press.
Lycett, M. (2013). ‘Datafication’: Making sense of (big) data in a complex world. European Journal of Information Systems, 22(4), 381–386.
Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work and think. New York, NY: Houghton Mifflin Harcourt.
Miller, P., & Rose, N. (2008). Governing the present: Administering economic, social and personal life. Cambridge: Polity Press.
Nóvoa, A., & Yariv-Mashal, T. (2003). Comparative research in education: A mode of governance or a historical journey? Comparative Education, 39(4), 423–438.
Ozga, J. (1987). Studying education through the lives of policy makers: An attempt to close the micro-macro gap. In S. Walker & L. Barton (Eds.), Changing policies, changing teachers: New directions for schooling? (pp. 138–150). Milton Keynes: Open University Press.
Ozga, J. (2016). Trust in numbers? Digital education governance and the inspection process. European Educational Research Journal, 15(1), 69–81.
Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton, NJ: Princeton University Press.
Porter, T. (2012). Funny numbers. Culture Unbound, 4, 585–598.
Power, M. (1994). The audit explosion: Rituals of verification. London: Demos.
Rose, N. (1999). Powers of freedom: Reframing political thought. Cambridge: Cambridge University Press.
Rowlands, J., & Rawolle, S. (2013). Neoliberalism is not a theory of everything: A Bourdieuian analysis of illusio in educational research. Critical Studies in Education, 54(3), 260–272.
Sætnan, A. R., Lomell, H. M., & Hammer, S. (2011). Introduction: By the very act of counting: The mutual construction of statistics and society. In A. R. Sætnan, H. M. Lomell, & S. Hammer (Eds.), The mutual construction of statistics and society (pp. 1–17). Abingdon: Routledge.
Saldaña, J. (2013). The coding manual for qualitative researchers. London: Sage.
Sellar, S. (2015). A feel for numbers: Affect, data and education policy. Critical Studies in Education, 56(1), 131–146.
Simons, M. (2015). Governing education without reform: The power of the example. Discourse: Studies in the Cultural Politics of Education, 36(5), 712–731.
Smith, W. C. (Ed.). (2016). The global testing culture: Shaping education policy, perceptions and practice. Wallingford: Symposium Books.
Thompson, G., & Cook, I. (2014). Manipulating the data: Teaching and NAPLAN in the control society. Discourse: Studies in the Cultural Politics of Education, 35(1), 129–142.
Webb, P. T., & Gulson, K. (2012). Policy prolepsis in education: Encounters, becomings, and phantasms. Discourse: Studies in the Cultural Politics of Education, 33(1), 87–99.