HEA Report, submitted 8 May 2008

The student learning experience in higher education

Literature review report for the Higher Education Academy

Hubert Ertl, Geoff Hayward, Susannah Wright, Anne Edwards, Ingrid Lunt, David Mills, Kai Yu

The student learning experience in higher education 1

The Higher Education Academy 2008

Acknowledgements The authors are very grateful to the project steering group—Miriam David, Graham Gibbs, Monica McLean—for their invaluable support and advice at different stages of the project. We would also like to thank colleagues at Oxford University Department of Education, particularly Harriet Dunbar-Goddet and Alis Oancea, for their suggestions and feedback. Most especially we would like to thank Stephanie Sturdy for her excellent work with literature searching, the EndNote database for the project, and proof-reading the report.


Table of Contents

Executive Summary  5
Introduction  14
    General background  14
    Review objectives  16
    The student learning experience: research and policy  16
Methodological approach to identifying, selecting and analysing the literature  22
    Identifying the literature and selection processes  22
    Method of analysis  25
    Overview of included literature  27
    Reflections on the methodological approach  29
Findings  31
    1. Inventory studies  32
        Overview of the research literature in this section  32
        Students’ approaches to studying  33
        Course experience questionnaire  35
        Combining the approaches  37
        Summary and implications  37
    2. Action research  39
        Overview of action research literature  39
        Action research findings  42
        Methodological issues  44
    3. Induction and transition  46
        Overview of the research literature in this section  46
        Student expectations and preparedness for university study  47
        The process of induction as preparation for HE  47
        Competences and understanding required at university  48
        Characteristics of ‘non-traditional’ students  49
        Summary and implications  49
    4. Teaching, curriculum and learning environments  50
        Overview of the research literature in this section  50
        Strategies to promote learning  52
        Constructing learning environments  56
        Summary and implications  58
    5. Student perceptions of learning  60
        Overview of the research literature on student perceptions of learning  60
        Perceptions of learning by particular groups of students  60
        Perceptions of learning by subject and teaching method  62
        Perceptions of learning by developmental stage  63
        Summary and implications  63
    6. Assessment and feedback  64
        Overview of the research literature in this section  64
        Forms of assessment  65
        Student experiences and perceptions of feedback and assessment  66
        Summary and implications  68
    7. Student learning experience as a measure of quality in higher education  69
        Overview of the research literature in this section  69
        Literature search strategies  69
        Assessing student learning for quality assurance  70
        Summary and implications  74
Discussion and implications  76
References  78
Appendix  101


Executive summary

This document reports on a review of research on the student learning experience in higher education, funded by the Higher Education Academy. The review was conducted over six months by a team based at Oxford University’s Department of Education.

Methodology

The review was conducted using a collaborative approach, bringing together the experience of a number of researchers at the department (the report authors) and the input of a steering group of experts in the field. The review aims:
• to identify and discuss different conceptualisations of the student learning experience in higher education as developed in the relevant literature;
• to provide an overview and a critical analysis of methodological approaches employed in the literature to investigate the student learning experience;
• to discuss interventions developed to improve the student learning experience.

The report outlines the background of the topic under review. The student learning experience has become an important issue in the literature on higher education only comparatively recently, partly as a result of a new student-focused emphasis in the longer established discourse on quality and quality assurance in higher education. Changes in UK higher education, most notably the rapid expansion of student numbers and institutions and the recent move to tuition fees, are recognised as important starting points for the review.

The review team adopted a systematic approach to identifying, selecting and analysing relevant literature. The team conducted an initial search of electronic databases, using a common set of search terms such as “student learning experience”, “student learning”, “learning experience”, “learning context”, “learning environment”, “higher education” and “university” (and lexical derivations of all these terms). This initial search resulted in a total of 523 items. Titles and abstracts were scanned, and at this stage the review team excluded items that fulfilled one or more of the following criteria:
• not about student learning;
• not related to the UK;
• not about higher education;
• not about undergraduates.
The electronic search was supplemented by systematic searching of selected websites, hand-searching of selected journals, snowballing, and searching for particular authors. Overall 256 items were identified for coding and review. Items were


selected on the grounds of relevance to the review, and relevance and accessibility for target audiences. The literature selected for detailed review was analysed at three different levels:
• content (main findings about the student learning experience);
• methods (methods used to investigate, describe and discuss the student learning experience);
• methodology (the overriding theoretical ideas that guided the research).

The review team developed an analytical map as a heuristic tool to categorise the literature according to methods and thematic areas. This map was developed through a staged process, with members of the review team discussing emerging themes and methods on the basis of items read at ‘reading meetings’.

Method                                    Items
A) Reviews                                    9
B) Experimental                               4
C) Inventory-based                           50
D) Action-research oriented                  14
E) Evaluation                                70
F) Descriptive                              158

Thematic area                             Items
1) Induction/transition                      38
2) Approaches to teaching                    40
3) Curriculum development and resources      41
4) Constructing learning environments        20
5) Student perceptions of learning          105
6) Assessment and feedback                   36
7) Quality assurance and enhancement         11
8) Other                                     14

The most common methods used were descriptive (158 studies) and evaluative (70). The 50 inventory-based studies formed a distinctive group. Of the thematic areas, more studies related to students’ perceptions of their own learning (105) than to any other theme. Other common areas include approaches to teaching (40) and curriculum development and resources (41).


Findings

The findings are reported according to the categories in the analytical map. The first sub-section focuses on the distinctive group of inventory studies; the second focuses on action research studies, allowing an in-depth discussion of methodological and conceptual issues pertinent to much of the literature. Subsequent sections are thematic, focusing on induction and transition, approaches to teaching and curriculum development, student perceptions of learning, assessment and feedback, and quality assurance and enhancement.

Inventory studies

A total of 50 papers using inventories of some form or another, spanning the period 1990–2006, were reviewed. A large number of instruments (19) were used, often derived from one another. Many studies focused on developing and refining these instruments and assessing their validity in new contexts. While of methodological value, the contribution of these studies to an understanding of the student learning experience is limited. Three main groups of studies emerged: 1. studies relating to student approaches to studying; 2. investigations using course experience questionnaires (CEQ); 3. studies combining inventories with other qualitative approaches.
1. Assessing the value and quality of the various studies in category 1 is problematic given the range of instruments used, their different purposes and the methodologies employed. Some studies had a definite methodological objective, either as a primary or a secondary focus of the research. The best studies in this category identify challenging statistical issues, such as how to compare cross-product matrices validly in order to make sound comparisons between different groups of students in their approaches to study.
2. The CEQ has been used for over a decade as a performance indicator in Australian higher education, and is now being used increasingly in other countries. The CEQ has fed into curriculum development projects, for example evaluations of the introduction of problem-based learning (PBL). It has also been used to examine differences in experience between students taking different degree programmes. Studies in this category found a significant positive relationship between graduates’ final degree classifications and their scores on the CEQ. In one such study of distance learning students, experience of appropriate assessment, good teaching, clear goals and standards, good materials and good tutoring were all positively associated with increased attainment.
Students’ responses to the CEQ in studies in this category reflect the multidimensional nature of student learning patterns, similar to that found in North American studies on student evaluations of individual instructor effectiveness.


3. Studies in this category demonstrate what can be achieved using inventory-based approaches combined with qualitative research. Some attempt to assess the presage-process-product model of learning and estimate values for the strength of association between different elements in the model. Inventory-based studies provide a potentially useful assessment technology, but there is limited vision about how this technology can be used to change or improve the student learning experience. More longitudinal and evaluation studies, which take account of students’ disciplinary, institutional and wider socio-cultural settings, are required for such an evidence base. For the future development of this area of research it also seems important to build systematically on the findings of previous studies and, rather than replicating previous studies, to develop methodological approaches further with the student experience in mind.

Action research

Fourteen papers were classified as action research. Most of these items report on small-scale interventions carried out at one institution, in one programme of study or on one module. Most of the studies report on interventions carried out over one term or study year; several, though there are exceptions, are one-off trials of a new idea without time for iterative phases of implementation and reflection. Constructivist notions of learning were common, although theoretical or conceptual underpinnings were rarely discussed explicitly. Three groups of studies were identified: 1. studies into approaches to teaching and curriculum development; 2. studies into constructing learning environments; 3. studies into assessment. Maximising dialogue between teachers and students, and among students, was a common theme in these studies. Mechanisms used to facilitate more collaborative approaches to teaching and learning include ICT-based discussion forums, concept maps, peer-assessment exercises, and group discussions. Overall, the studies reviewed suggest that collaborative work had a broadly positive effect on student learning in general and on developing critical thinking capabilities and academic literacy in particular. However, the limited scope of most studies, and weaknesses in reporting and data analysis, mean that the evidence base is not always convincing.

Induction and transition

Thirty-three papers were reviewed on induction and transition. Most of the research is descriptive, based largely on questionnaire surveys administered to first-year undergraduates early in the year. The papers reviewed indicate that concern about student withdrawal, which has grown with the greater diversity of the undergraduate population, has led to a number of efforts to develop innovative


induction programmes, and to consider more carefully the implications of the transition from school or further education to higher education. Four foci were identified: 1. student expectations and preparedness for higher education; 2. the process of induction and specific innovative induction approaches; 3. the ‘gap’ between requirements at secondary and tertiary level within a subject; 4. non-traditional students and improving retention.
1. The reviewed research in this area suggests that the majority of students manage the transition from school to higher education successfully, though there is a significant minority whose expectations of university did not match reality and who had inaccurate prior perceptions of higher education. Such studies emphasise the need for staff awareness of the qualities, attitudes and skills that students bring with them to university.
2. The literature reviewed in this category reports on different induction programmes and initiatives. A common finding is the need to go beyond ‘information transmission’, where students are in passive mode, to involve students actively in the induction process and to begin to develop generic or study skills.
3. Most of the studies in this category investigate the assumption that changing patterns of recruitment to higher education and widening participation require higher education institutions to develop a detailed understanding of the prior school experiences of incoming students and the skills and understanding that they bring with them.
4. Within the context of the widening participation agenda, research in this category has sought to investigate the specific situation and needs of different groups of ‘non-traditional’ students who enter higher education, such as mature and working-class students. Most of the studies are small-scale and based in one institution. Larger-scale studies are required, with more systematic approaches to evaluating both immediate and longer-term outcomes.1

Teaching, curriculum and learning environments

The student learning experience has also been researched through studies examining approaches to teaching, curriculum and resource development, and the construction

1 For a more extensive review of literature on transition, adjustment and retention see Harvey, L. and Drew, S., with Smith, M. (2005) The first-year experience: a review of literature for the Higher Education Academy, pp. 66–82, available at: http://www.heacademy.ac.uk/assets/York/documents/ourwork/research/literature_reviews/first_year_experience_full_report.pdf


of learning environments. These studies are addressed in one section, involving a review of 72 studies. Many are small-scale and focused on one course or module. Research in this area is, with rare exceptions, characterised by an absence of large-scale studies connecting across courses and/or institutions, or of longitudinal designs. Methods are mainly descriptive and evaluative. Two main themes were identified: 1. strategies to promote learning; 2. constructing learning environments.
1. Studies in this category include investigations of strategies such as group work, peer learning and problem-based learning. Most of these studies draw on constructivist conceptualisations of learning. While these often, but not exclusively, draw on single-cohort data, they are among the strongest studies on teaching and curriculum development that were reviewed. Evaluations of co-operative group work have focused more on affective than on cognitive outcomes, and have drawn more on student self-reports than on measures of cognitive gain. These studies suggest that benefits to students include emotional engagement and a sense of belonging; active rather than passive engagement in learning, thereby promoting student autonomy and responsibility for learning; and enhanced student motivation. Some studies indicate that peer tutoring has moved from being seen as a way of coping with more students with less resource to being recognised as an opportunity for reciprocal learning among pairs of students. There was a considerable literature on problem-based learning, particularly in medical and health care education.
2. Studies in this area centre on ICT and web-mediated learning and fall into two broad categories: those which focus on developing the technology to support student learning, and those that see these new resources as an opportunity to rethink how students’ learning experiences can be enhanced.
Technology-focused studies have necessarily reflected the increased complexity of the resources available. These studies discuss how resources can be tailored to create flexible and broad-based support for students. Studies on enhancing student learning take a longer-term view of the role of technology in higher education, examining, for example, pedagogic aspects of ICT resource design and the impact of the web on the nature of learning communities. Some common findings emerge. First, collaborative modes of learning and opportunities for discussion appear to enhance the learning experience for at least some students, promoting motivation and autonomy and enabling them to acquire transferable skills. Second, both tutors and students need guidance and support in introducing pedagogical innovations that might differ from previous experience and challenge prior expectations. These innovations might, therefore, prove resource-intensive. Third, these studies indicate the difficulty of altering individual courses while institutional environments, norms and power relations stay the same.


Student perceptions of learning

Over 60 non-inventory studies on ‘student perceptions of learning’ were reviewed, covering a wide variety of topics and themes. All the papers within this category are descriptive, predominantly using survey and interview methods. The weaker ones classify student learning experiences on mono-dimensional scales of quality/satisfaction/depth of learning, in a way that parallels the inventory-based research. Many are based primarily on samples from a single institution or a single course. There were three broad sub-groups of research: 1. perceptions of learning by particular groups of students; 2. perceptions of learning by subject and teaching method; 3. perceptions of learning by developmental stage.
1. Many papers in this category explore the learning experiences of particular categories of students who face particular challenges, who have historically been disadvantaged, or who perceive themselves as discriminated against (including mature students, female students, low-income and working-class students, and students living at home).
2. A smaller group of papers focuses on student learning in particular subjects, pointing to some of the specific issues faced by those teaching these disciplines, and students’ perceptions and experiences of particular methods (including problem solving, the ‘Oxford’ tutorial, and e-learning).
3. This group of papers explores the importance of the temporal staging of the student learning experience. A number address the first-year learning experience, while others examine the way in which students do (or do not) perceive their skill development over the course of a degree. Subject-specific and topic-specific findings could be particularly useful for informing policy and practice: for example, supporting students with disabilities. Some of the emerging findings from research into the relationship between paid work, debt and student learning are also important.
The strongest papers are those that triangulate different quantitative and qualitative methodologies, bring together research spanning more than one institution, or support extrapolation beyond a single case study.

Assessment and feedback

Twenty-seven studies were reviewed for the section on assessment and feedback, mostly small-scale and related to one course and institution, using descriptive or evaluative designs. There were two main categories of studies, relating to: 1. different forms of assessment and how these relate to student learning;


2. student experiences and perceptions of feedback and assessment.
1. Studies in this category discuss different forms of assessment, including continuous assessment and peer and self-assessment. Continuous assessment provided students with frequent feedback and an opportunity to improve their work, but entailed a heavy tutor workload. Studies of peer assessment reveal that, with appropriate guidance, peer assessment can enhance students’ ability to evaluate and think critically, but some researchers raised doubts about its validity.
2. There is some consensus over what students want from feedback (timely, clear, constructive, with individualised comments), but there are obstacles to meeting students’ expectations (time constraints on tutors, institutional procedures), and difficulties, and sometimes contradictions, in the way students engage with feedback.
The better studies in this section, though small-scale, are often sensitive to the institutional environment and to the complexities of students’ attitudes and actions. Some of the research on feedback, in particular, provides a model for examining sensitively students’ perspectives on this important aspect of the student experience, and how these perspectives are linked with institutional procedures, priorities and power relations. A common substantive finding is that assessment can dictate how students approach their learning: students focus on what is assessed. Yet studies reveal a number of potential obstacles to ensuring that assessment and feedback enhance the student learning experience. The assessment practices described in studies, and the forms of feedback desired by students, are resource-intensive. Institutional priorities of ‘objective’ summative measures might be in tension with utilising assessment and feedback to benefit student learning.

Student learning experience as a measure of quality in higher education

The final sub-section reported on items concerned with investigating the student learning experience in the context of quality assurance in higher education. To capture the wide, international discourse on quality it was necessary to use an additional search strategy for this section. This section discusses a number of instruments developed for assessing students’ experience of learning for quality assurance purposes, and the ways in which findings from these instruments are and can be used. Most of the earlier work in this field took place in the US and Australia and prepared the ground for the development of instruments in the UK. Studies suggest that using student experience as a quality indicator is a useful addition to traditional standards of quality in higher education, which relate primarily to reputation and resources. The use of instruments assessing student experience can lead to improvements in teaching performance and student learning, but the basis


for using these indicators for making comparisons across institutions is limited. There is some consensus that, compared to other quality indicators, students’ evaluation is valid, reliable, stable and useful in assessing the quality of higher education. The research reviewed indicates that collecting student evaluation data, through questionnaire surveys or through other mechanisms, can yield useful information and can help institutions to understand students’ experiences and needs. As long as surveys are not driven by a narrow quality assurance agenda, institutions can use the knowledge they gain from student evaluation data to examine institutional policies and practices, to identify weak areas in their provision, and to improve the quality of student learning.

Conclusion

The report concludes with suggestions for future research. Recommendations include the need for funding to support long-term, cross-institutional studies that explore and develop appropriate concepts and rigorous methods, taking account of the rapidly changing student learning experience in higher education. The need to build researcher capacity in this area, including supporting collaboration between subject specialists and researchers in the field of education, is also identified.


Introduction

General background

The student learning experience is currently high on the political and policy agenda. Policy-makers and bodies such as the Higher Education Academy and the Quality Assurance Agency for Scotland identify it as a top priority, and it is prominent in the mission statements and promotional literature of individual higher education institutions. This interest in the student learning experience has a varied provenance, related both to institutional and systemic change within the higher education sector and to broader political agendas. Within the sector, growth in student numbers and an increase in the number and types of institutions offering higher education courses since the 1960s (following the recommendations of the Robbins report (Committee on Higher Education 1963)) have fundamentally changed the composition of the student body and the institutional landscape. In Trow’s analysis, such expansion is linked to changes in curriculum, teaching and learning that would inevitably have consequences for student learning (Trow 1974, 2006). Another important influence has been the growing institutional concern with student ‘satisfaction’, associated with growing market competition between universities. There are also broader political imperatives, including the linking of education and economic growth, and marketisation and public service reform, reflected, for instance, in the recommendations of the Dearing report related to flexible, modular provision, quality assurance and accountability, and increased student financial contributions to their higher education (National Committee of Inquiry into Higher Education 1997). Alongside these political and financial concerns there are associated developments such as the rise of a personalisation agenda (David and Clegg 2006). In the current decade, widening participation policies and the advent of tuition fees have, arguably, altered the nature of the student experience and of student learning.
These concerns have combined to focus attention on the changing experience of the student. In addition to policy, interest in student learning is also shaped by different intellectual traditions. The domains of research and practice have important links. Many of the educational researchers, such as Paul Ramsden, Liz Beaty and Graham Gibbs, who made theoretical contributions to this field are now in strong positions to influence higher education policy, particularly in the UK. The immediate rationale for this review can also be traced to the adoption of the term ‘student learning experience’ as a core part of the Higher Education Academy’s mission under the leadership of Paul Ramsden, and the need for the sector to develop and share a clearer conceptual understanding of these concepts.


Yet what is meant by the student learning experience is rarely subjected to discussion or clearly defined. In its Strategic Plan 2005–2010 the Higher Education Academy identifies the student learning experience as a key driver of its work. What this means for the focus of energy and resources is less than clear: “strategies for retention, the effective uses of e-learning, the development of enterprise capabilities, and support for excellent research training environments” being among the many areas to be considered (HEA 2005, p.3). This ambiguity is mirrored among the research community. The student experience network of the Society for Research into Higher Education aims “to find out what … students in UK higher education are learning in the widest sense of the word from their experiences within and outwith formally institutionalised study.”2 Others, however, confine attention to the formal academic curriculum and to learning in the higher education classroom. A range of research traditions, themes and methods have been drawn on to explore the student learning experience. Within educational development debates there has also been a shift in pedagogical perspective from a focus on what constitutes good teaching (including the debate on teaching methods in higher education and the training and professional development of higher education teachers) to a focus on learning (Gibbs 2003). This has led to a highly diverse and varied research base, which is mapped and analysed in this report. A review of the literature related to students’ learning experience is therefore a timely venture. The Higher Education Academy’s invitation to tender allows reviewers to take account of these changing perspectives through “[making] explicit the issues and concepts relevant to [and] enabling and inhibiting factors that impact on the student learning experience” and examining “how the student learning experience can be defined” (HEA 2006a, p.2).
The invitation to tender also indicates a lack of reviews of this area of research. Meta-analysis in a wide-ranging and changing area is a difficult task. There is neither agreement on underlying definitions and conceptualisations, nor a clear understanding of how the student learning experience can be measured and evaluated. This review attempts to identify different traditions of research that, broadly speaking, offer perspectives on the student learning experience, and to investigate the factors that impinge on that experience. Student learning is affected by all the changes noted above, and the concept of ‘learning experience’ must be seen as a dynamic one. Consequently this review explores the development of interest in the student learning experience and how changing conceptualisations of learning (and teaching) are embodied in the research. This report investigates the consequences these conceptualisations have for the student experience in higher education, how they have changed, and how they have been adapted to meet the needs of a changing student body. This review has, therefore, in addition to considering in detail the nature of the student learning experience, also focused on the changing conceptualisation of ‘student

2 http://www.srhe.ac.uk/networks.sen.asp

The student learning experience in higher education 15

The Higher Education Academy 2008

learning’ in higher education and the methods used to investigate student learning and experience. Against this background the investigation and representation of the higher education experience of students in the relevant literature has been traced, analysed and mapped.

Review objectives

1. Conceptualisation of student learning
The review aims to provide an overview of the ways in which the student learning experience in higher education has been, and is, conceptualised. It evaluates the explanatory potential of these conceptualisations for the purposes of developing policy and practice. In so doing, the review pays attention to the increasing diversity of learners entering higher education in so far as this diversity is reflected in the literature. It also pays attention to the increasing diversity of institutions offering higher education programmes through a variety of different pedagogies, and to the wide spectrum of courses ranging from more traditional academic subjects to professional and vocationally-related courses.

2. Interventions
Given the above, a particular aim of the review is to provide an overview of interventions aimed at producing a more effective student learning experience. The review evaluates critically the results of such interventions, taking due heed of the quality of the research design when deciding whether to include evaluations of such interventions in the review.

3. Research methods and methodological approaches
The review aims to provide an overview of the methodological approaches adopted in the literature to investigate the student learning experience in higher education, and to assess the appropriateness of different methodologies for answering different research questions in different contexts. The review therefore systematically categorised the different types of research methodology employed to investigate learning processes in higher education and assessed how successful they are in producing relevant knowledge for use by practitioners and policy makers.

The student learning experience: research and policy

One could identify two broad 'traditions' of research into student learning. The first, and the most influential, referred to by many as the 'approaches to learning' research, derives originally from the field of educational psychology. It is a field that was strongly influenced during the 1970s by work carried out by Marton and his students, adopting
a qualitative methodology that was subsequently labelled 'phenomenography'. This tradition has evolved and developed, and has been particularly influential because of the close links between researchers and those involved in educational development. Much of the literature discussed in this report can be classified within this first broad category. The second, and much less coherent, field has been that of sociologically-informed research into the factors influencing learning that lie behind the immediate teacher-student interaction. This includes work on the informal and formal academic, institutional and disciplinary 'cultures', and how they shape students' learning experiences within higher education. More recently, this broad field includes an increasing amount of discipline-specific scholarship and pedagogic research, some of which has been promoted within the Scholarship of Teaching and Learning (SOTL) movement sparked off by Ernest Boyer's influential Scholarship Reconsidered (1990). A few scholars have explored the possibility of bringing these two rather different approaches together (Ashwin and McLean 2005), but on the whole there is relatively little dialogue between them.

Educational research into student learning

Whilst there is a quantitative tradition of research in the United States that seeks to explore student learning by measuring the impact of college on students' intellectual growth and psychological development (Terenzini 2005), histories of research into student learning often start with the work of Perry, a Harvard-based psychotherapist and counsellor (Perry 1968). Influenced by Adorno's work on the authoritarian personality, Perry developed a programme of qualitative research with students into their attitudes to learning. It led him to develop a nine-stage model of intellectual and moral development. Because he did not hold an academic position, his work only became influential among educational researchers when it was rediscovered in the 1980s. Since the 1970s, the educational study of student learning has been strongly influenced by a small group of scholars, several of whom first gathered at the Department of Education at Gothenburg University. They began meeting and working with others from Australia and the UK, at conferences in Lancaster and elsewhere in Europe. Their research, and their links to educational development units where their findings have often been put to use, have defined the field for more than 30 years. This history is key to understanding the current shape of research into higher education and the questions today's researchers raise. The field has a number of roots, including Perry's work, but its emergence is probably most accurately credited to Marton and his student Saljo. Marton's research group at Gothenburg developed the radical idea that one could categorise qualitative differences in the outcomes of student learning. They attempted
to categorise the different levels of understanding shown by students recalling the contents of an academic article. Drawing on careful empirical research, they argued that these different levels, which they called 'deep' and 'surface' approaches to learning, depended both on what was being learnt and on the environment in which students were learning. Marton and his collaborators (e.g., Marton and Saljo 1976) argued that one could classify students' responses to reflect their qualitatively different ways of understanding the text, along a nested hierarchy in which each higher level of understanding included the attributes of those below. Through a careful process of independent adjudication of responses, they identified a pattern whereby some students adopted a 'surface' approach to learning, while others displayed a more intentional and 'deep-level' approach to understanding. The two approaches seemed to correspond to different conceptions of learning and of the role of the learner. Marton and colleagues saw an important distinction between their work and notions that students had identifiable and inherent 'learning styles' (Kolb 1984). Over time, Marton sought to define the distinctiveness of the approach his group had developed, sometimes in response to criticisms of the Gothenburg school. The use of 'phenomenography' (a derivative of phenomenology) to provide a conceptual underpinning to this approach occurred subsequently (Marton 1981). In contradistinction to ethnography's attention to individual context and nuance, the phenomenographic approach seeks to classify objectively the observed differences in individuals' perceptions and descriptions of their learning. The approach is almost entirely interview-based; there is no attempt to observe or participate in people's lived experiences of learning. Entwistle, an educational psychologist with a background in developing inventory-based scales for measuring student achievements, met Marton in 1975.
Newly appointed editor of the British Journal of Educational Psychology, Entwistle was quickly convinced of the importance of this new approach. This led to the publication of a series of seminal articles in the journal (e.g., Marton and Saljo 1976). Entwistle took up and developed these ideas at the Department of Education at Lancaster. Through a series of international conferences (initially organised by the department's information officer, Hounsell) he created effective collaborations with scholars from Australia and South Africa. Biggs arrived in Lancaster in 1979 with his own factor-analysis inventories for measuring the study process. Working together, Entwistle and Biggs combined their scales and borrowed each other's items. They began to develop a common understanding of the extrinsic and intrinsic factors at work in what they increasingly saw as a learning system or environment. Later, Ramsden also worked at Lancaster, and applied this research to develop the Course Experience Questionnaire (CEQ). He used the findings to inform policies on improving teaching at universities, and also his own influential introductory text (Ramsden 2003 [first edition 1992]). It would be too simplistic, however, to paint a picture of a neatly consensual intellectual history. At a conference in 1983 celebrating Marton's work, tensions between intellectual
generations, and between theorists and practitioners, were reputedly resolved through a cathartic football match (Entwistle, personal communication). The conference led to the publication of the influential The Experience of Learning (Marton, Hounsell and Entwistle 1984). Gradually, the circle of scholars adopting the 'approaches to learning' approach widened and its influence spread. Perry, who had conducted some of the earliest interviews with students, was invited to speak at Lancaster, and his early work became better known. A group of Australian scholars, including Prosser and Trigwell, also developed this field. They began to interview university teachers to explore what they called the 'outcome space of learning', examining specific aspects of conceptions of the teaching environment (Prosser and Trigwell 1999) and how these related to learning. Much of the work discussed in this review relies on questionnaires and inventories informed by this methodological and epistemological background. Most tends to accept, rather than directly challenge, the fundamental principles of deep and surface learning laid out by Marton and Saljo in the mid-1970s. More recently, critiques of this work have emerged (e.g. Webb 1997; Malcolm and Zukas 2001; Haggis 2003), with more sociologically-informed researchers asking questions about the ease with which a deep-surface dichotomy can be applied across a diversity of contexts, and about the often repetitive nature of the research findings. The model has also been criticised for its static nature, and for the difficulty of representing dynamic changes in student conceptions.

Sociological research into academic cultures and disciplinary identities

Sociological research into how university academics and students understand learning and assessment was given an influential head-start by the work of Howard Becker and his colleagues in the 1960s (Becker et al. 1968). Using qualitative surveys and interviews, they pointed to the very different understandings and experiences of assessment and grades held by staff and students. However, this field remained rather undeveloped within sociology, and their work tended not to be cited by those within education. From a more structuralist perspective, the work of Pierre Bourdieu (1988) on academic hierarchies and 'habitus' has been taken up by sociologists of higher education, as has the work of Basil Bernstein on how curricular knowledge relates to identities and relationships in educational contexts (e.g., Bernstein 1996). Coming out of work on the anthropology of literacy, there is a growing interest in 'academic literacies' and student participation in academic discourses, and in their implications for students beginning their study at universities (e.g., Lea and Street 1998; Northedge 2003). Early anthropological work on apprenticeship has also been influential (Lave and Wenger 1991). There is an increasing interest in 'communities of
practice’ (most noticeably in the literature on e-learning). However some of this literature is rather under-theorised and can risk underplaying institutional and pedagogic relations of power. Sociologists of knowledge have long been interested in the role that academic cultures and disciplinary identities play in shaping knowledge production—a theme that Becher’s (1989) influential Tribes and Territories brought to an educational audience. Linked to this there is also a growing field of disciplinary-specific literature on student experiences of learning the discipline, often conducted using the methods of sociology. Some of this is developing within the emerging field of the scholarship of teaching and learning (Huber and Morreales 2001). As part of this movement to encourage teachers to reflect on disciplinary pedagogies, there is also a growing volume of ‘pedagogic research’, much of which is primarily used to inform practice. Within this broad category of sociologically-informed work, one could also include more critical and reflective approaches to studying pedagogy and learning that draw on traditions of social theory (McLean 2007; Mann 2001; Northedge 2003). There are signs that recent work commissioned by the Economic and Social Research Council’s (ESRC) Teaching and Learning Programme (TLRP), such as within the Social and Organisational Mediation of Learning (SOMUL) project (e.g., Houston and Lebeau 2006), is likely to reinvigorate and consolidate this second tradition of research, ensuring that a contextualised understanding of student ‘experience’ becomes more common within the field (TLRP, 2007). Research around concepts such as academic literacies (Lea and Street 1998) and threshold concepts (Meyer and Land 2003) is also increasingly influential.

The political discourse: student learning experience and quality in higher education

Probably the most significant aspect of the political debate on the student learning experience relates to changing notions of quality in higher education. In the early twenty-first century many countries are approaching the universal stage of higher education described by Trow (1974), leading to huge changes in the higher education landscape. The rapid expansion of higher education has been accompanied since the 1980s by a growing political and public interest in understanding and assessing the comparative performance of universities (Nordvall and Braxton 1996). Indicators of quality have been established for this reason. Internal and external quality assurance systems have utilised measures of quality associated with reputation and resources, while student-centred indicators such as student learning and development have often been ignored (Astin 1991a, 1991b; Harvey and Green 1993; Harvey and Knight 1996; Lomas 2002).

There are some indications that the view that quality assurance in higher education should take students' perspectives into account has gained wide acceptance (Coates 2005), though there are also still signs of the persistence of older views of quality as defined by Astin and others. In England, various initiatives from the mid-1990s onwards demonstrate that higher education quality is increasingly seen in connection with the experience of students in higher education. Only some of these initiatives can be mentioned here (see Gibbs 2003 for further discussion). In 1996, the Higher Education Quality Council (HEQC) produced Guidelines on Quality Assurance, addressing the need to take the student experience into account in assessing institutional quality (see Harvey et al. 1992), and recommending the use of student feedback to monitor and evaluate teaching and learning (Higher Education Quality Council 1996). At almost the same time, the Higher Education Funding Council for England (HEFCE) commissioned a review of efforts to improve learning and teaching in higher education (Gibbs 1997). In line with these initiatives, the Task Group chaired by Professor Sir Ron Cooke, established in 2001, published Information on Quality and Standards in Higher Education (Cooke 2002; Higher Education Funding Council for England 2003), suggesting a new approach to quality auditing in higher education, including seeking "information on student satisfaction with their higher education experience". In its 2005 annual review document HEFCE advocated a focus on the student, to "ensure that all higher education students benefit from a high-quality learning experience meeting their needs and the needs of society" (HEFCE 2005).
In 2006 the Higher Education Academy, after extensive consultation with the sector, published the UK Professional Standards Framework for Teaching and Supporting Learning in Higher Education, which requires higher education staff to understand, support and promote the "student learning experience" (HEA 2006b).

Methodological approach to identifying, selecting and analysing the literature

Identifying the literature and selection processes

The primary focus of the literature search was on materials produced since 1992; key publications were also located for the years prior to 1992. The search focused on published research and conference papers. Although it was recognised that doctoral theses could potentially be a useful source of research evidence (an initial search of Index to Theses3 confirmed this expectation), they were excluded from this analysis because they are not easily accessible to potential readers of this review. For the initial search stage, the review team used a common set of search terms with a range of popular electronic databases. The British Education Index (BREI) was searched for published educational material. The Educational Resources Information Center (ERIC) was not included in this initial search following expert advice that the UK research it contained was already covered by the BREI. The review team also conducted searches in more general social science databases, the Applied Social Sciences Index and Abstracts (ASSIA), Scopus, and the International Bibliography of the Social Sciences (IBSS), in order to locate published material in non-education journals. Education-line was searched for unpublished 'grey literature' that was accessible to potential readers. In the first instance, combinations of the following search terms were used: "student learning experience", "student learning", "learning experience", "learning context", "learning environment", "higher education" and "university" (and lexical derivations of all these terms). The review team conducted mainly free-text searches to minimise the chances of missing relevant items (many items did not designate higher education or tertiary education as a level signifier). Later, the review team searched for particularly prominent authors, including those identified in the first stages of the search (see below).
Search terms, databases searched, and numbers of items were recorded on search log forms, generating an initial list of 'hits' from which items identified on more than one database were eliminated. This initial search identified 255 items in the BREI and 238 items in Education-line. The review team identified over 30 items in ASSIA, Scopus, and IBSS. These totals do not include overlaps. The review team then scanned titles and abstracts from this initial list to decide whether items should be included in or excluded from the review. In some cases a quick reading was required to make this decision. Items were excluded that were: 1) not about student learning; 2) not related to the UK;

3 www.theses.com


3) not about higher education; 4) not about undergraduates. These codes for rejecting items were recorded against items on the initial list, and items that met these initial criteria for inclusion were recorded in an EndNote database. The initial list of over 523 items was reduced to 196 items (99 Education-line, 74 BREI, 14 ASSIA, 3 IBSS and 6 Scopus).

Alongside this initial search of electronic databases the review team conducted a systematic search of selected websites:

• Current Educational Research in the UK
• Department for Education and Skills (Research Reports)
• Higher Education Funding Council for England
• Universities UK
• Economic and Social Research Council
• Higher Education Academy
• Higher Education Statistics Agency
• Learning and Teaching Support Network
• Society for Research into Higher Education
• Department for Employment and Learning Northern Ireland
• Institute of Learning and Teaching
• National Union of Students
• Quality Assurance Agency for Higher Education
• GuildHE
• Welsh Funding Councils
• Association of University Teachers
• Foundation Degrees Forward
• National Association of Teachers in Further and Higher Education

Hand-searches of journals for the years 2004 to the present were also conducted. Journals were identified from the list of 'heavily cited' journals identified by Tight (2006, p.48):

• Active Learning in Higher Education
• Assessment and Evaluation in Higher Education
• Higher Education
• Higher Education Management
• Higher Education Policy
• Higher Education Quarterly
• Higher Education Research and Development
• Higher Education Review
• International Journal for Academic Development
• Journal of Further and Higher Education
• Journal of Geography in Higher Education
• Quality in Higher Education
• Research in Higher Education
• Studies in Higher Education

• Teaching in Higher Education
• Tertiary Education and Management

A second search stage included:

1. searches on bibliographic databases using the names of key authors identified from the initial search stage and the prior knowledge of the research team;
2. authors and individual studies recommended to the review team (for instance, by members of the Steering Group).

The exclusion criteria noted above (1–4) were also applied in these additional search strategies and during the later reading and coding stages described below. A total of 256 items were coded and read for the review. The review team used a different search strategy, outlined in the findings section, to locate additional items related to quality assurance and enhancement. The review developed a three-stage selection process. Items that met each of the following three stages of selection were included in the detailed review. Some were excluded at a later stage, when it became clear in the analytical process that they did not fulfil all selection criteria.

Stage 1: relevance for the review. Is the source clearly aimed at understanding learning processes and the student experience in higher education? Exclusion criteria (1–4) guided selection at this stage. The decision to include only UK-based studies and those relating to undergraduate learning was taken on grounds of manageability (and the Higher Education Academy's stated concern with the UK situation). The review draws selectively on international literature, however, in the discussions of research methodology and conceptualisations of learning.

Stage 2: relevance and accessibility for the target group. Is the source relevant and accessible to policy makers, researchers and practitioners? Items that met the initial inclusion criteria were evaluated for clarity of reporting and the 'usefulness' of results. Ease of access for target groups was considered at this stage, and the decision not to include doctoral theses was taken for this reason.
Stage 3: internal coherence. Does the strategy of investigation warrant the conclusions reached? Items with obviously inappropriate research designs were excluded. This was the most difficult criterion to apply consistently. Where there were doubts about how to judge the appropriateness and consistency of a research design, the decision at this point was always to include the item for further analysis. This strategy proved helpful for the review project: the shortcomings of individual studies highlighted some of the problems of doing research on the student learning experience, and helped the team to assess and discuss the strengths and limitations of different research approaches. Therefore, only a very few studies were excluded on the basis of this criterion alone.


Stage 1 was the main selection process during the literature searching stage, although considerations of target group relevance and internal coherence were also kept in mind. Stages 2 and 3 were dominant during coding and reading, although items were also eliminated at this stage on grounds of relevance, where a more detailed reading indicated that they did not match the inclusion criteria (1–4). Out of the 256 items that were coded and read, a smaller number was selected for more detailed review. These decisions were made partly on the basis of the quality of research methodology and reporting, and relevance according to the criteria outlined above, and partly in order to include an indicative selection of literature for the review.

Method of analysis

This review aimed to analyse research about the student learning experience in higher education at three interconnected levels.

Micro level: content. At this level, the review outlines and critically discusses the main findings of selected studies on students' learning experience in higher education.

Meso level: methods. At the meso level, the review discusses the methods used in the relevant literature to investigate, discuss and describe students' learning experience in higher education. It contrasts the ways in which different methods were used in different contexts to investigate, discuss and describe learning processes and experiences.

Macro level: methodology. At this level, the review looks for the overriding theoretical ideas that guided the selection and conceptualisation of the methods used to investigate, discuss and describe students' learning experience in higher education. For reasons of simplicity this level is not split up further, for instance into the levels of (research) methodology and the wider paradigmatic values and beliefs on which methodologies are developed. Such methodologies, using the term in a wider sense, are bound up with normative frameworks about learning processes.

The analytical framework for the items eventually included was devised collectively through discussions among the review team (the authors of this report). Initial impressions from the first readings were discussed in 'reading meetings'. In preparation for these meetings, members of the review team were allocated a selection of items to read, including some to be read by the whole team, in order to identify common themes and research methods and to discuss possible categorisations. This was an iterative process: the final analytical map used for the review was devised after three such meetings and also took into account the advice received in the first meeting of the Steering Group. The map was constructed as a grid (Figure 1), with research methods
along the top and areas of research along the side. The map was refined and changed continuously as the analytical work of the review team progressed.

Method (columns): Reviews | Experimental | Inventory-based | Action-research oriented | Evaluation | Descriptive
Area (rows): Induction/transition | Approaches to teaching | Curriculum development and resources | Constructing learning environments | Student perceptions of learning | Assessment and feedback | Quality assurance and enhancement | Other

Figure 1: Analytical map

Reviews, experimental designs, evaluations and descriptive studies are standard methodological categories used in educational research and in systematic reviewing in the natural and social sciences. The review team understood review, evaluation and descriptive study along similar lines to those delineated by the EPPI-Centre for classifying education research (EPPI-Centre 2003), although the EPPI guidelines for detailed coding were not applied.

• Review: studies that aim to draw together information, findings and conclusions from a range of previous reports.
• Evaluation: studies that evaluate a practice, programme or other intervention by assessing how it works (for instance, assessment of acceptability, feasibility, financial or other resource implications, or effects on educational outcomes).
• Descriptive: studies whose main aim is to produce a description of a state of affairs or phenomenon, or to document its characteristics.

The review team understood experimental to refer to studies in which the researcher attempts to establish whether something causes an effect by measuring the effect of a researcher-manipulated independent variable (such as programme or teaching method) on a dependent variable (such as grades).


It was necessary for the review team to make a judgement about the methodology even where authors described their own research approach. Sometimes the review team's judgements differed from authors' descriptions. Action-research and inventory research are not research methodologies in the same way and are not included in the EPPI-Centre classification. However, they proved to be important and distinctive approaches to research in this area. Therefore they were singled out for separate discussion in substantive sub-sections at the start of the findings section. Inventory studies are ubiquitous in the dominant approaches to teaching and learning strand of research, and form a distinct methodological category that requires detailed analysis. Action-research items form a sub-group of the many small-scale, single-institution studies examined for this review and provide a valuable opportunity for an in-depth analysis of the conceptual and methodological issues this type of research raises. These items exemplify the potential value of studies on the student learning experience, but also demonstrate very clearly the limitations of studies conducted over a short time-scale and with limited resources. The review team decided to identify fairly broad thematic areas in order to make the analytical map easy to use and easy for readers to understand. How these themes were interpreted is described in more detail in the findings section.

Overview of the included literature

All items in the database were then coded according to this map (for example: induction/transition, descriptive; approaches to teaching, evaluation). Items that had already been distributed for reading were coded by the member of the review team who had read that item; the remainder were coded on the basis of a quick reading by the research officer. Items were divided between members of the team for more detailed reading and reviewing, taking the particular expertise of the team members into account. Where appropriate, initial coding was revised in the process of more detailed analysis. Coding was seen as a heuristic and analytical device, a way of thinking about and evaluating 'bodies' of research. The map itself was also revised during this process. At times it was difficult to differentiate between areas. Some research addressed more than one theme (some studies, for instance, aimed to link teaching and curriculum with student perceptions of learning). There was also much overlap between areas, most notably between approaches to teaching, curriculum development, and learning environments. An overview of the included literature is most easily achieved using the analytical map (Figure 2). This version of the map includes numbers for lines and characters for
columns as these are used for making reference to specific areas of the map in some of the findings sections.

Area                                      A)   B)   C)   D)   E)   F)   Total
1) Induction/transition                    -    -    2    -   15   21      38
2) Approaches to teaching                  4    2    7    3    9   15      40
3) Curriculum development and resources    1    1    2    3   24   10      41
4) Constructing learning environments      1    -    1    5    3   10      20
5) Student perceptions of learning         1    -   32    2    1   69     105
6) Assessment and feedback                 1    1    2    1   10   21      36
7) Quality assurance and enhancement       1    -    1    -    7    2      11
8) Other                                   -    -    3    -    1   10      14
Totals                                     9    4   50   14   70  158     305

Methods: A) Reviews; B) Experimental; C) Inventory-based; D) Action-research oriented; E) Evaluation; F) Descriptive.

Figure 2: Analytical map with numbers

The total number of studies accumulated from the individual cells exceeds the number of items examined. The surplus is accounted for by items where thematic areas overlapped (most commonly between curriculum development and resources and approaches to teaching), or where a single method could not be identified (it was often hard to decide, for instance, whether a study was descriptive or an evaluation, particularly if the evaluation strategy was fairly weak, and some studies clearly used more than one method). Overall, descriptive methods were by far the most common (158), followed by evaluations (70). Inventory-based approaches (sometimes combined with descriptive methods in a single study) were also common (50) and formed a distinctive group, concentrated in the thematic area of student perceptions of learning (32). Action research studies were another distinctive, though smaller, methodological group, this time distributed more evenly across thematic areas (14). There were few reviews (9), and experimental methods were extremely rare (4).
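The double counting described above can be illustrated with a small sketch. The item names and codes below are hypothetical, not the review's actual data; the point is only that summing the cells of a method-by-theme map counts multiply-coded items more than once.

```python
from collections import Counter

# Hypothetical coding of four reviewed items: each item carries one or more
# (method, theme) codes, as items did on the review's analytical map.
items = {
    "item_1": [("Descriptive", "Student perceptions of learning")],
    "item_2": [("Inventory-based", "Student perceptions of learning")],
    # An item spanning two thematic areas contributes to two cells:
    "item_3": [("Evaluation", "Approaches to teaching"),
               ("Evaluation", "Curriculum development and resources")],
    # An item using more than one method also contributes to two cells:
    "item_4": [("Descriptive", "Induction/transition"),
               ("Evaluation", "Induction/transition")],
}

# Tally one count per (method, theme) code, not per item.
cells = Counter(code for codes in items.values() for code in codes)

print(sum(cells.values()))  # 6 cell entries ...
print(len(items))           # ... from only 4 items
```

Summing the cells gives 6, although only 4 items were coded, which is exactly how the map's cell totals come to exceed the number of items examined.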


Of the thematic areas, the review team found more articles related to student perceptions of learning than to any other theme (105), of which the majority (69) were classified as descriptive. Other common themes were approaches to teaching (40) and curriculum development (41); there was considerable overlap between these two thematic areas, with a number of studies addressing both. Thirty-six studies addressing assessment and feedback, and 38 addressing induction and transition, were identified. Only 11 studies were coded quality assurance and enhancement; this is part of the reason why a different and wider search strategy was adopted to investigate this important area (see section 7 of the findings). The review team designated 14 studies 'other' as they did not fit neatly into the designated thematic areas. The inventory-based studies coded 'other' were associated with the methodological development of inventory tools more than with student learning per se, and are discussed in the relevant section of the findings. Other themes in this category addressed learning obliquely rather than directly, including the perceived impact of part-time work on students, and retention and attainment.

Reflections on the methodological approach

The methodological approach adopted produced a workable map of research in this broad and ill-defined area. Although not exhaustive, it gives a useful indication of dominant and less dominant methodological approaches and thematic areas. The process of mapping items also made it necessary to think about the methodological approaches adopted in research on the student learning experience, which made it possible to fulfil one of the review objectives outlined in the introductory section of this report. The approach to analysing research methods made it possible to evaluate the conceptual underpinnings of research, and to draw disparate studies together in one place.

It is hoped that the map is of value to those seeking an overview of research in this area. Target groups could follow up the findings of this review in the original literature cited, or use it as a base from which to conduct more expansive analyses of research in particular thematic areas. This is particularly true for discipline-specific research on the student learning experience. It was beyond the scope and time limits of this review to analyse these often diverse and sometimes complex discipline-centred discourses in detail, although this is clearly an important task.

Nonetheless, the review team found that there were limitations to the version of a systematic approach to identifying literature that was adopted. It proved difficult to identify search terms that captured all the relevant literature, a difficulty compounded by the partial coverage of relevant journals in the major databases. Hand-searching and following recommendations were therefore essential supplementary strategies, but could only be pursued to a limited extent in the time available. It also became clear that the systematic and electronic database-centred approach to searching relevant


items made it difficult to identify researchers or methodologies opposed to the dominant conceptual and methodological approaches. Alternative voices, as represented, for instance, by Rowland (2000), Clegg (2004) and Mann (2001), might be hard to find in a systematic literature search of this kind because they deliberately use different concepts and language. This is particularly true in cases such as this where the focus of the review is, to a degree, a politically-driven concept, which, in turn, is guided by one dominant research paradigm. Therefore, the systematic review approach used here might not capture the whole field, or reveal directions in which the field might develop in the future.


Findings

The structure of this findings section is derived from the analytical map described above. Reviewing and writing tasks were divided in a way that drew on the expertise of different members of the review group. The team members analysed the nature of the research literature, the research results, and their implications within each sub-section in turn. In the following discussion of items selected for review, additional literature from the UK and elsewhere is used.

Literature within two methodological paradigms, inventory-based and action research, is discussed first. It was decided to discuss these methodologies separately because of the numerical dominance of inventory-based studies in research on learning and teaching in higher education, and because of the usefulness of action research studies for a detailed discussion of both the potential and the limitations of research in this area. The sub-sections on inventory-based and action research are followed by the thematic areas identified from the analytical framework: induction; teaching, curriculum and learning environments; student perceptions of learning; assessment and feedback; and quality assurance and enhancement.


1. Inventory studies

1.1 Overview of research literature in this section

Research on students' approaches to learning, learning styles and their evaluations of teaching effectiveness, employing a variety of inventories and questionnaires, forms a dominant paradigm in research on student learning in higher education. Price and Richardson (2003) and Coffield et al. (2004) provide reviews of instruments that purport to measure aspects of learning style (as opposed to 'experience'); few of these instruments have been found to have sufficient reliability to make the studies employing them worth citing. Richardson (2000) provides a review of the various inventories developed to measure students' approaches to studying, and Ramsden (2003) discusses the possible implications of such research for teaching in higher education. Such studies say little about the student learning experience; they are rather about the students themselves and their characteristics, skills or preferences. A strict interpretation of the brief for this review would, therefore, question the relevance of such studies. However, they are included because of their numerical importance in the literature, and because of an implicit belief in many of the articles using this methodology that understanding students' approaches to learning in this way is an important step towards improving (albeit in the future) the quality of the student learning experience.

An alternative approach is the use of inventories that aim to elicit students' evaluations of teaching effectiveness and educational quality. The body of literature adopting this approach has been reviewed (for instance by Marsh (1987) and Marsh (in press)) and is covered in section 7 of the findings. This part of the review examined 50 papers spanning from 1990 to 2006.
In these studies, inventories are primarily used as a research tool rather than as an instrument for monitoring the quality of teaching and students' educational experience (see section 7 of the findings for studies that use inventories to monitor quality). All but six of the 50 papers were UK-based; of the remainder, one was from Holland, one from Eire and four from Australia. Clearly this is only a subset of the available publications employing inventory-based methodologies. In addition, there are more extensive reviews, such as Richardson (2000), plus a range of papers developing and assessing the psychometric properties of the various instruments; these have not been included as they are not related directly to the student learning experience. When reviewing these publications, one is immediately struck by the sheer variety of inventories and questionnaire instruments being used, in many cases coupled with more qualitative approaches. The following instruments were identified:


Instrument                                                     Number of studies
Inventory of Learning Styles                                   2
Guglielmino Self-directed Learning Readiness Scale             1
Rezler & French Learning Preferences Inventory                 1
Reflections on Learning Inventory                              1
Student Course Experience Questionnaire                        2
Approaches to Study Inventory (ASI)                            1
Revised Approaches to Study Inventory (RASI)                   2
Extended Approaches to Study Inventory                         1
Approaches and Study Skills Inventory for Students (ASSIST)    2
Short Approaches to Study Inventory                            1
Learning and Studying Questionnaire                            1
Experiences of Teaching and Learning Questionnaire             1
Conceptions of Learning through Discussion Questionnaire       1
Approaches to Learning through Discussion Questionnaire        1
Perceptions of Learning through Discussion Questionnaire       1
Experience of Studying Mathematics Inventory                   1
Mathematics Study Process Inventory                            1
Student Learning Inventory in Economics                        1
Course Experience Questionnaire                                9

Quite often the instruments are derived from one another, either to target special populations or particular pedagogies, or to elicit subject-specific understandings. Several studies include the development and refinement of instruments among their outcomes. Checking the consistency of scales within instruments, and their meaning (for example, whether they are assessing 'deep' or 'surface' approaches to learning), typically utilising principal components analysis4, is common. Often this is part of the process of assessing the validity of an instrument in a new context (e.g. Byrne and Flood 2003). While such studies may be of methodological value, they add little to the understanding of the student learning experience.
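As an illustration of the scale-checking technique just mentioned, the following sketch runs a minimal principal components analysis on hypothetical item scores. The data, and the choice of two retained components, are invented for illustration; this is a bare-bones version of the technique, not a reproduction of any reviewed study's analysis.

```python
import numpy as np

# Toy data: 6 respondents x 4 questionnaire items (hypothetical scores).
X = np.array([
    [4.0, 3.5, 1.0, 1.5],
    [5.0, 4.5, 2.0, 1.0],
    [2.0, 2.5, 4.0, 4.5],
    [4.5, 4.0, 1.5, 2.0],
    [1.5, 2.0, 5.0, 4.0],
    [3.0, 3.0, 3.0, 3.0],
])

# Centre each item (column), then eigendecompose the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending order
order = np.argsort(eigvals)[::-1]         # re-sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the first two principal components: the 4-item space reduced to 2-D.
scores = Xc @ eigvecs[:, :2]

# Proportion of total variance carried by each component.
explained = eigvals / eigvals.sum()

print(scores.shape)         # (6, 2)
print(explained[:2].sum())  # proportion of variance retained by 2 components
```

In scale-checking studies, the loadings (columns of `eigvecs`) would then be inspected to see whether items group onto the scales, such as 'deep' and 'surface', that the instrument claims to measure.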

1.2 Students' approaches to studying

Articles reviewed that employ inventories probing students' approaches to studying typically begin with a literature survey before setting out how they add to this already voluminous literature. The added value may take one or more of several forms:

• Subject-specific studies utilising general instruments, such as the Approaches to Study Inventory or the Approaches and Study Skills Inventory for Students (Duff 2004; Kell and van Deursen 2002; Morris 2001; Maguire et al. 2001; Sutherland 2003a).
• Subject-specific studies using specially adapted instruments (Eley and Meyer 2004; Shanahan and Meyer 2001).
• Studies linking students' approaches to studying and outcomes (Trigwell and Prosser 1990; Eley and Meyer 2004).
• Pedagogic studies assessing the impact of particular teaching strategies, for example reflective discussion and writing (Ellis and Calvo 2006; Norton et al. 2004) or assessment (Williams 1992).
• More complex modelling studies where the intention is to model the student learning experience in a range of contexts or under conditions of 'dissonant' and 'congruent' forms of contextualised learning engagement5 (Prosser et al. 2003; Entwistle 2005). Typically these rely on 'deep' and 'surface' constructs (or their analogues), derived initially from the work of Marton and Saljo (1976).

Often the outcomes are descriptive, comparing the learning of students in different contexts, using different approaches, or students with different characteristics, such as gender (Meyer 1995) or course choice (Jarvis and Woodrow 2001). Others model the impact of contextual factors on learning patterns (Vermunt 2005).

4 Principal Components Analysis (PCA) is a statistical technique for simplifying a data set by reducing multidimensional data sets to lower dimensions for analysis.

Assessing the value and quality of these various studies is problematic given the range of instruments used, their different purposes and the methodologies employed. Smaller-scale studies, with small sample sizes and focussed on students in one institution or one programme, generally offer little: there is almost a feeling that the research was done simply because it could be. Often such studies are justified by specific development goals, but there is little follow-up visible in the literature. The better studies in this genre had a planned longitudinal element in the research design (Kell and van Deursen 2002). Further longitudinal studies, with articles addressing the whole of the research process rather than just the first phase, would add value to the field. Some studies had a definite methodological objective, either as a primary or a secondary focus of the research. Too often, however, what follows is an exploratory principal components analysis that adds little; if the aim is to develop improved instruments, newer conceptual and statistical insights perhaps need to be explored. The best study in this genre (Meyer 1995) clearly identified a challenging statistical issue: how to compare cross-product matrices validly in order to contrast males' and females' approaches to study.

5 A 'congruent' pattern of relationship would be where, for example, students adopt a surface approach to learning and perceive the teaching and learning environment as supporting surface approaches, or students adopt a deep approach and perceive the teaching and learning environment as supporting deep approaches. The latter is shown to consistently lead to higher quality learning outcomes, the former to poorer ones. Alternatively, approaches and perceptions can be 'dissonant': for example, students adopt a deep approach while perceiving the teaching and learning environment as supporting surface approaches, or vice versa. Such dissonance tends to lead to very poor learning outcomes.


Overall, such studies shed little light on the student learning experience per se, although they do show consistently that 'student-focused conceptual change approaches to teaching are associated with deep approaches to learning, and teacher-focused information transfer approaches to teaching are associated with surface approaches to learning' (Prosser et al. 2003, p. 38). Another group of inventory studies is more informative for the present review.

1.3 Course experience questionnaire

Nine papers employing the Course Experience Questionnaire (CEQ), or variants thereof, were reviewed. Much of the impetus for the use of the CEQ stems from its development as a monitoring instrument at a time when accountability and the academic quality of degree programmes are major concerns for higher education institutions (see section 7). The CEQ has been used for over a decade as a performance indicator in Australian higher education, and is now being used increasingly in other countries. However, the CEQ has also fed into curriculum development projects and evaluations of, for example, the introduction of problem-based learning (PBL). The CEQ has also been used to examine differences in experience between students taking different degree programmes, for example at the Open University (Lawless and Richardson 2004), harking back to the use of the original Course Perception Questionnaire from which the CEQ was developed (Ramsden and Entwistle 1981).

In the UK context, there is an important series of studies by John Richardson and his collaborators that examine distance education using the CEQ, including work on students with hearing loss (Lawless and Richardson 2004; Richardson 2005b; Richardson, Long and Woodley 2003; Price and Richardson 2003). Students' responses to the CEQ in these and other studies exhibit a multidimensional nature similar to that found in North American studies on student evaluations of individual instructor effectiveness. In addition to this multi-dimensional construction of different forms of quality, responses to the CEQ "are dominated by a single overarching construct that can plausibly be construed as perceived academic quality" (Lawless and Richardson 2004, p. 370). This study, along with many others, found a significant positive relationship between graduates' final degree classifications and their scores on the CEQ.
Experience of appropriate assessment, good teaching, clear goals and standards, good materials and good tutoring were all positively associated with increased attainment for these distance learning students. These findings are echoed in studies of campus-based students (e.g. Wilson et al. 1997, Lizzio et al. 2002). Richardson et al. (2003) demonstrated a positive association between the academic engagement of distance learning students and their perceptions of the academic quality of the courses, as measured by scores on the CEQ. However, their detailed analysis suggested that the relationship is far from linear, being influenced by factors such as student gender (women had lower scores on the CEQ factors of appropriate workload and student choice than men), age, prior qualifications, and hearing status, and the academic staff who have responsibility for a course. This leads to a complex


model that attempts to capture the relationship between background variables, academic engagement, perceived academic quality and ratings of general satisfaction. This is an approach that warrants encouragement, as the implications for teacher development can be addressed: "… the present findings suggest that changing the attitudes and behaviours of tutors in distance education will have little impact upon learning outcomes in their students (and particularly upon students' general satisfaction with their courses) unless it also leads to appropriate changes in the perceptions of students themselves … improving teaching is likely to be achieved only through improving the experience of students" (Richardson et al. 2003, p.242).

The CEQ, often used in conjunction with an approaches-to-study inventory, has also been used to compare the impact of different types of learning environments, for example subject-based or problem-based (Lyon and Hendry 2002; Sadlo and Richardson 2003). In Lyon and Hendry's (2002) study, students rated their problem-based medical education programme more highly than an earlier cohort had rated a more traditional medical education programme in the same university. This was attributed to a reduction in content and in lecture-based delivery, which had encouraged rote memorisation over learning for understanding. However, students on the problem-based learning programme scored lower on the 'clear goals and standards' scale, an issue that was being addressed by changes to the curriculum. In addition, this study identified items in the CEQ that were not felt to be appropriate for those following a problem-based learning curriculum, calling into question the validity of the CEQ for evaluating the experience on these types of programme. Sadlo and Richardson's (2003) statistically sophisticated study also showed systematic differences in the perceptions of the academic environments of different programmes in occupational therapy.
Students on the problem-based learning programmes had higher scores on the CEQ scales concerned with appropriate assessment and an emphasis on independence than those following subject-based courses. In addition, those on the problem-based learning programmes produced relatively high ratings on the good teaching and clear goals and standards scales. This finding challenges Lyon and Hendry's argument that the CEQ is not suitable for evaluating problem-based learning programmes. The final study reviewed (Karagiannopoulou and Christodoulides 2005) examined the relationship between students' perceptions of their current learning environment, measured using the CEQ, and their approaches to study and academic outcomes in a Greek university. The findings of this study are comparable to the work of others (Entwistle et al. 1991; Prosser et al. 1996) in revealing, for successful students, a coherent relationship between the adoption of a deep approach to study and perceptions of the academic environment as encouraging a deep approach to studying (Karagiannopoulou and Christodoulides 2005, p. 346). The relationship between approaches to studying and outcomes (as measured by grades) was not straightforward, but the authors did find that students' perceptions of the current learning environment were a stronger predictor of academic achievement than prior


academic ability. The study also pointed to the importance of affect as an influence on students’ cognitive processes and their learning at an early stage of the higher education experience (cf. Lizzio et al. 2002, Robbins and DeNisis 1994).

1.4 Combining the approaches

Trigwell and Ashwin's research on undergraduate learning at Oxford (Trigwell 2005; Trigwell and Ashwin 2006; Trigwell and Ashwin 2003) stands as an exemplar of what can be achieved by combining inventory-based approaches with qualitative research. This work attempted to assess the presage-process-product model of learning and to estimate values for the strength of association between different elements in the model. The outcomes of this study indicated the strong link between approaches to study and contextual factors:

"Students who are more likely to be seeking personal meaning and understanding from their studies (adopting more of a deep approach) experience higher levels of good teaching, more appropriate workloads and assessment, higher levels of collegiality, and more encouragement from their colleges to pursue their own academic interests. The students who describe adopting approaches focused more on meeting the assessment requirements through memorising rather than understanding (more of a surface approach) experience lower levels of good teaching, less clear goals, less appropriate workloads and assessment, lower levels of collegiality, lower co-ordination between college and department in their course, and less encouragement from their college to pursue their own academic interests." (Trigwell and Ashwin 2003, p.43)

More studies of this sort, which also take into account affect (students' subjective feelings about, and experience of, relationships with their tutors, their colleagues and their learning environment) as a key factor in the student learning experience, are needed.

1.5 Summary and implications

The increasing use of these inventories is clearly linked to the perceived need to monitor academic quality and teaching effectiveness for both curriculum development and quality assurance purposes. There is, therefore, likely to be a strong political commitment to this type of work in the future. The research provides a developing and potentially useful assessment technology. However, there is a lack of vision about how this technology can be used to change or improve the student learning experience, and little evidence that the findings from such studies are being used in this way. While one can probably argue that perceptions of learning environment demands and academic quality make a difference to the student learning experience and to learning outcomes, there is little evidence about what changed perceptions might


produce for the student learning experience, or about how changes in the teaching-learning environment might change perceptions. More longitudinal and evaluation studies, linked to intervention, are needed to produce such an evidence base. Such research needs to take due account of students' disciplinary, institutional and wider socio-cultural settings. In so doing, a useful link could be forged to socio-cultural views of learning and to ideas about learning conceived of as ways of practising in a subject.6

6 An obvious pitfall to be avoided here is a shift to over-reliance on metaphors of community and practice as providing an adequate model of student learning in this paradigm (see Edwards (2005)).


2. Action research

This section reviews the materials that, using the term loosely, were classified as examples of action research (column D, Figure 1); this classification is conceptualised in the following sub-section. Most studies can be classified as constructing learning environments (line 4), with the remaining studies located in the areas of approaches to teaching (line 2), curriculum development and resources (line 3), student perceptions of learning (line 5), and assessment and feedback (line 6). Some studies cannot be located in a single line, but branch into one or two others (Figure 2).

2.1 Overview of action research literature

Of the fourteen items categorised as action research, nine are papers published in academic journals, four are conference contributions, and one is a monograph reporting on eight action research projects at different higher education institutions (Gibbs 1992). For an overview of this type of research, to complement the more detailed but selective discussion here, see Gibbs (2003).

Most papers report on small-scale interventions carried out at one institution, in one programme of study or in one module of a programme. The authors are mainly lecturers on the programme in question who are, in the first instance, interested in improving their own course; learning more general lessons is an aim for all authors, but sometimes only to a limited degree. In some cases, the author(s) work for a unit of the higher education institution concerned with the professional development of staff or with improving teaching at the institution. In this respect the papers covered in this section overlap with those in the section on teaching, curriculum and learning environments (section 4 below). Indeed, in the process of analysis, some papers were 'relocated' on the analytical map, despite claiming (in some cases very explicitly) to be examples of action research.

The classification of studies as action research was guided by the notion that this kind of research aims primarily to improve practice (rather than to develop new knowledge), and thereby focuses on particular contexts of practice in higher education. However, certain conditions need to be fulfilled in order to warrant the label of action research. The main criterion for classifying papers as reporting on action research is the degree to which the research follows a systematic and explicit structure.
Typical of action research are cycles of developing an intervention, implementing it, assessing it, redesigning the intervention and implementing the redesigned intervention (Hult and Lennung 1980; Elliot 1991). Pring (2000, p.135) regards the act of reflection as central to this kind of cycle, and describes the process of action research as "[... the act of] constant putting into practice, reflecting on that practice, refining of beliefs and values in the light of that reflection, subjecting the embodied ideas to criticism [...]".


Gummesson (2000, p.208) takes this notion of reflection further, seeing it as a starting point for generating "[...] a specific (local) theory which is then tested and modified through action" (see also Day et al. 2002). In particular, action research can be a very effective tool for illuminating and understanding processes of change in social contexts (see Hult and Lennung 1980). In order to fulfil this function, action researchers need to be aware of the singular but also the universal dimensions of the context in which they conduct their investigations.

If these criteria were applied rigorously, the number of papers in this section would be even smaller than it is; the demarcation from the purely descriptive studies examined in the following sections must be regarded as rather fluid. The research process is not always made explicit in the papers, but it can sometimes be traced in the way the different stages of the research are analysed.

The number of students involved in the studies was generally small, in most cases between 7 and 15. Notable exceptions include the studies conducted by Kinchin et al. (2005) and by Cowan and Creme (2005), each of which involved well over 150 students. Sometimes the more detailed investigation of small numbers of students is supplemented by feedback on new initiatives or interventions given by bigger groups of students (Clouder 2005). In some cases, authors fail to state the number of participants, but a small sample size can be inferred from the description of the research. In all studies, students were the primary focus of the investigation.
Longer-term investigations are the exception (see, for instance, Clouder 2005). In two cases the research investigated the implementation of an intervention in one study year, the refinement of the intervention and its reimplementation in the next study year. This investigation of two cycles of the same intervention is a typical design in action research. Depending on the context, the redesigned intervention (cycle 2) is implemented in the same group of students or in the consecutive student cohort. For example, Kinchin et al. (2005) looked at implementing collaborative concept mapping in two consecutive years of first-year students. Cowan and Creme (2005) built their investigation of peer assessment on an earlier project. The programmes investigated in the paper cover a wide variety of subjects: occupational therapy and health care, business studies (two studies), microbiology, education (teacher training in maths and science), early childhood studies, pastoral counselling and science and technology studies. Cowan and Creme’s (2005) study covers programmes in ten disciplines at the same university.


In a number of studies, theoretical or conceptual underpinnings are not explicitly discussed. In most cases, underlying ideas that are broadly constructivist (that the learner is an active collaborator in the learning process and in the construction of knowledge (Cullen et al. 2002, p.54)) can be derived from the assumptions made and the improvements in teaching and learning that are aimed at. Common assumptions include a positive view of student activity in learning processes, the idea that learning can be promoted by increased communication among students, and the conviction that students' understanding can be improved by making learning processes explicit. Kinchin et al. (2005) is a good example of how concepts from constructivism can be used to theorise the role of prior knowledge and experience, and students' mental reconstruction of knowledge as presented by teachers in the process of concept mapping, without explicitly referring to constructivist theories of learning. In cases where a theorised starting point for the investigation is sought, socio-cultural theory in general, and Vygotsky's concept of the zone of proximal development (the gap between a learner's current and potential level of development, related to his or her capacity for independent problem solving (Vygotsky 1978, p.86)) in particular, are discussed by several authors. Two studies explicitly draw on the concepts of deep and surface learning. Across the studies, the depth of engagement with a theoretical school or tradition varies substantially; only in two cases is a body of theory systematically used to design a study and to analyse the data generated.

The studies reviewed used a range of methods of data gathering from a variety of sources. Most commonly, interviews (often unstructured or semi-structured) with students were conducted to gauge their perception of an intervention.
Other sources of evidence include online communication forums, student work in various forms (for instance, the concept maps analysed in Kinchin et al. 2005), collaborative discussion sessions with students (Waite and Davis 2006; Bulpitt and Martin 2005), questionnaires, and student learning journals. Cowan and Creme (2005) used self-assessment exercises on pieces of writing, and students' responses to these exercises, as the main source of data. In around half of the studies, combinations of different sources and methods are used. In some studies, the sources of data and the methods of data collection are not described; in these cases only rather vague references are made, sometimes to 'students' responses' to an intervention. Very few contributions draw explicitly on the methodological literature to discuss the potential, and pitfalls, of action research methods, and the role of the researcher in action research. Exceptions are Rich and Brown (2006) and Cowan and Creme (2005). This failure to refer to the methodological literature can result in sketchy frameworks for analysing the data gathered (see below).


2.2 Action research findings

This section outlines some of the important areas of investigation covered in the action research studies reviewed here. Three broad areas, derived from the lines of the analytical map presented in the section on the methodological approach, can be identified:

1. Studies into approaches to teaching and into curriculum development and resources (lines 2 and 3);
2. Studies into constructing learning environments (line 4);
3. Studies into other areas (mainly lines 5 and 6).

A selection of indicative studies is outlined in each section.

2.2.1 Approaches to teaching, curriculum development and resources

Rich and Brown's (2006) study showed some of the ways in which curricular aims, and strategies for achieving them, can be investigated using action research-type approaches. The study looked at the development of information literacy among business students. Building on the existing literature, Rich and Brown identified different types of literacy relevant for this group of students: information literacy, digital literacy and technological literacy. The study reported on the introduction of a collaborative approach that requires students to work together in developing ways of digesting information. This was contrasted with the purely transmission-based approach applied previously in the programme. At first, students were motivated to develop their own approaches towards working together and their own information-handling techniques. After a while, however, student motivation decreased and the researchers redesigned the intervention to allow for more informal question-and-answer sessions between individual students and lecturers, to an extent reversing the decision to put more emphasis on collaborative work between students.

The question of the degree to which, and the ways in which, collaborative work between students can promote certain types of learning and achieve certain kinds of curricular outcomes is also central to the study conducted by Kinchin et al. (2005). This study demonstrated clearly how the introduction of new teaching approaches can be linked to wider curricular aims. Drawing on a general discussion of teaching styles in the sciences and of broadly constructivist ideas on teaching and learning, the authors described the introduction of collaborative concept mapping as a teaching approach for students in microbiology.
The paper examined some examples of concept maps produced by students, identifying two different ways of working with concept maps and analysing the potential and pitfalls of each. The findings indicated that introducing concept maps had a generally positive effect on the understanding and the learning experience of most students, and the authors discussed other contexts in which concept maps might be used effectively.


2.2.2 Constructing learning environments

Clouder (2005) investigated the role of 'threshold concepts' in the professional development of health care professionals. Drawing more clearly than other studies on relevant theoretical literature (Meyer and Land 2003), Clouder employed the notion of 'threshold concepts': those concepts within a specific discipline that lead to new and previously inaccessible ways of thinking about important subject themes, in this case the nature of caring for someone. Virtual learning environments, such as online forums, were introduced as a way of promoting exchange mainly among students, and also between students and lecturers, with the aim of enabling students to reach the threshold of a meaningful caring experience. The research generated rich descriptions of students' practical experience of caring, built around quotes from students. Different concepts of caring are described, drawing on students' experience and the relevant literature. Clouder found that interaction between students sharing first-hand experience led to a deeper understanding of what caring means than any classroom-based teaching arrangement. Transformational experiences are needed for students to learn about the affective dimension of caring, and asynchronous virtual learning environments can help students to create a wider framework for reflecting on their experiences in the light of the experiences of other students. The study concluded that the virtual learning environment can help students to get to grips with sometimes disturbing experiences, and has the potential to maximise valuable dialogue among students and to make student-lecturer communication more effective.

A further illustrative example, in which student collaboration is used to achieve certain learning objectives, can be found in Waite and Davis (2006). The researchers brought together Early Childhood Studies students in discussion groups to develop their critical thinking capabilities.
This small-scale study built on the discourse on critical thinking as an aim of higher education. Using ideas derived from socio-cultural theory, the researchers developed a framework for conducting discussions in which students were asked to reflect on ideas in written and oral form. The study analysed group discussions, comments on written work, and a questionnaire in which students were asked to reflect on their experiences. Waite and Davis found that well-planned and carefully structured collaborative work can foster critical thinking among students. This, however, requires a trust-based lecturer-student relationship that is often not developed in conventional teaching-learning arrangements.

2.2.3 Other areas

Cowan and Creme (2005) is the only example of action research aimed at the assessment of learning in higher education selected for review here. It looked at the development of 'academic literacy' (in the sense of the relationship between students' writing and meaning-making (Lea and Street 1998)) through self- and peer-assessment among 320 students in ten disciplines within the area of culture and community studies at one higher education institution. Drawing on the literature on academic literacy, the study regarded writing as a key skill and a key tool for developing understanding. At the same time, the study attempted to introduce a structure for written self- and peer-assessment that consisted of a number of exercises students were asked to carry out individually or with other students. These exercises also formed the main data analysed in the study. The authors concluded that peer-assessment can be a productive way of enhancing students' writing skills, developing collaborative working, and thinking critically about assessment issues, although not a way of saving tutors' time or substituting for tutors' assessment of student work. The authors suggested, however, that the exercises developed and tested in the project could be adapted to other contexts and used in other subject areas and institutions.

2.2.4 Overarching finding

Maximising dialogue between teachers and students, and also among students, is a common theme in the action-research-based studies reviewed. Mechanisms used to facilitate more collaborative approaches to teaching and learning include ICT-based discussion forums, concept maps, peer-assessment exercises, and group discussions. Overall, the findings of these studies suggest that collaborative work has broadly positive effects on student learning in general, and on the development of critical thinking capabilities and academic literacy in particular. However, the evidence base for these findings is not in all cases convincing.

2.3 Methodological issues

A key reason why the findings of some of the studies discussed in this section are not entirely convincing is a lack of rigour in the data analysis. A frequent weakness in these studies is the reporting of student responses to an intervention without describing how these responses were initiated, recorded and analysed. This leads, in some cases, to the presentation of long quotes from interviews or other exchanges between teachers and students with no discussion of how these were selected, and without their being properly embedded in the overall argument presented. This 'impressionistic' analysis of data often leads to reports of a fragmented nature, making it difficult for the reader to reconstruct the case presented and draw meaningful conclusions for practice elsewhere. Waite and Davis' (2006) study stands out for its clear rationale for the analytical criteria employed (namely the theoretical framework of critical thinking dispositions developed by Norris and Ennis (1989)). At the other extreme, some studies refer only very loosely to positive student responses to justify the overall perception of a successful intervention.

According to Carr and Kemmis (1986), action research can potentially link scholarly inquiry with change in educational processes. Due to the methodological and conceptual shortcomings outlined in this section, this potential of action research has not been realised for investigating and improving the student experience in higher education. Most of the studies reviewed here fail to present a systematic structure for continuously reviewing and refining interventions aimed at improving students' experience of their education. Also, ethical considerations regarding the role of the researcher as both implementer and evaluator of interventions (Hult and Lennung 1980) are rarely discussed.

Measuring the success of an intervention is an important part of the reflection process, and can lead to redesigning the intervention and eventually to improving practice. However, in some studies reviewed, this measurement failed to compare like with like. For instance, the 'quality' of a new module of instruction cannot be meaningfully measured by comparing the marks awarded in this module with marks in other modules of the programme (De Vita 2004). It is also not convincing to evaluate students' opinions by asking them two vague Likert-scale questions regarding their learning style, to give the mean and standard deviation of the aggregated response scale, and to draw far-reaching conclusions for redesigning the curriculum from this information (Rich and Brown 2006). These two examples drawn from the papers reviewed demonstrate a tendency to try to make small-scale research more 'valid' and 'objective' by generating meaningless quantitative data.

Despite the need for time to implement, evaluate, redesign and re-implement an intervention in action research, most projects reviewed were short-term in nature. Work termed action research is sometimes nothing more than a single trial of a new idea, without the iterative phases of implementation and reflection during which action (an intervention, a new teaching approach, etc.) can be planned and observed. This shortcoming is probably related to funding constraints for this kind of research (where funding is noted it is often institutionally based).
It can be assumed that these projects do not attract substantial external research funding, which limits opportunities for the capacity building over time required for longer-term action research projects.


3. Induction and transition

3.1 Overview of research literature on induction and transition

The increasingly diverse nature of student intakes to higher education, concerns about retention and withdrawal, and awareness of the changing context of higher education, particularly in relation to fees and funding, have led to a focus in the literature on the transition from secondary school (and to a lesser extent further education) to higher education, and on the period and process of induction into higher education. Given the evidence that the majority of students who withdraw do so in their first year, a number of higher education institutions have attempted to enhance their induction processes with the aim of maximising 'attachment' to the institution and minimising the alienation that may lead to withdrawal. Bearing in mind the work undertaken for Harvey et al.'s (2006) recent review of the first-year experience, the discussion in this section is relatively brief.

While there is a large and well-funded US research literature on the 'first year experience' (some of which is reviewed by Harvey et al. 2006), the UK literature is relatively sparse: 33 papers were reviewed for this section. Most of the research falls into the descriptive category and is based on questionnaire surveys administered to first-year undergraduates early in their first year. The studies are often unfunded, or funded by the university itself, and are usually carried out by a member of staff in the higher education institution who has some responsibility for induction. The studies are small-scale, exploratory and based in one institution (the institution where the author is based). A minority have some evaluation aspect, though this is not systematic. The studies are mainly under-theorised and consist of descriptions of practice. The studies reported highlight a growing awareness of the importance of induction and of the development of a sense of identity and of belonging to a university.
The research reported may be divided into four foci:

1. A focus on student expectations of and preparedness for higher education, investigated by administering questionnaires early in the first year of the undergraduate period and asking students to reflect on their preparedness for higher education.
2. A focus on the process of induction, with descriptions of innovative or specific induction approaches which have attempted to go beyond the traditional information-giving induction of many universities.
3. A focus on what has become perceived as a 'gap' between the competences required at A-level (or at secondary level) and those required of students at tertiary level. These studies are often subject-specific and examine the competences required, for example, for university mathematics as compared with those required at A-level.


4. A focus on the characteristics of 'non-traditional' students, with the aim of identifying characteristics that may be addressed to improve retention.

An indicative selection of studies under these four themes is reviewed below.

3.2 Student expectations and preparedness for university study

Research suggests that the majority of students manage the transition from school to higher education successfully, though there is a significant minority for whom expectations of university did not match reality and who had inaccurate prior perceptions of higher education. Lowe and Cook's (2003) questionnaire survey of students' expectations of social and academic aspects of university, administered pre-entry and at the end of their first term in higher education, is typical of these studies. A small pilot study by Cook and Leckey (1999), surveying science undergraduates through questionnaires at induction followed by a second questionnaire at the start of the second semester, showed that many students arrive at higher education with unrealistically low expectations of the nature and amount of work. This finding was echoed in a study of first-year accounting students by Byrne and Flood (2005), which suggested that many students had low expectations of the amount of work required of them. Such studies emphasise the need for staff awareness of the qualities, attitudes and skills that students bring with them to university.

3.3 The process of induction as preparation for higher education

A number of articles describe innovative induction programmes. Most emphasise the need to go beyond 'information transmission', in which students are in a passive mode, and instead to involve students actively and begin to develop generic or study skills. In light of unacceptably high withdrawal rates from engineering courses at one university, a series of studies reported by Middleton and colleagues (Edward and Middleton 1997; Edward 2001; Edward and Middleton 2002; Edward 2003) described a one-week activity-based, problem-solving induction for engineering students (named "The Challenge"). This was not evaluated systematically, although student feedback appears to show that the course was enjoyable, even if it did not lead to skills gains. Laing et al. (2005) described an online Spiral Learning induction programme (using a series of web-based interactive learning activities) which attempted to bring new students' expectations into line with the reality of higher education, and aimed to provide an indication of students 'at risk' of not becoming engaged with academic life. Approaches to induction for distance learners are described by Forrester and Parkinson (2004) and Forrester et al. (2005). These studies offer recommendations for practice relating to the provision of timely, relevant and helpful information, helping students to find out who to contact in case of queries and difficulties, and helping students feel part of the institutional community. Another activity-based induction process is "Exploring London", described by Gaskin and Hall (2002).

Related to these studies is an initiative reported by May et al. (2005) that attempted to address withdrawal through learning support and the organisation of a 'Pre-Entry Skills Weekend' targeted mainly at non-traditional students. Although there was little evaluation, the authors claimed to be contributing to an emerging picture of what influences retention. Focussing on the transition from further education to higher education, Knox (2005) described a programme, 'Next Steps', that presents a generic curriculum to help students make this transition. Although there is no systematic evaluation, the author suggested that the programme had a beneficial effect on student progression, retention and performance. Again, this is a small-scale and local study.

A limitation of these studies is the lack of systematic evaluation, whether of goals such as skills acquisition or of follow-up outcomes such as the degree of settling into higher education or minimising withdrawal. They tend to be local, small-scale initiatives that are reported descriptively.

3.4 Competences and understanding required at university

Changing patterns of recruitment to higher education and widening participation have led to an awareness of the need for higher education to develop a detailed understanding of students' prior school experiences and the skills and understanding that they bring with them. Concern at the poor achievement of first-year mathematics undergraduates, and the implications of changing patterns of recruitment, has led to several studies that describe attempts to identify the skills developed at A-level and those required at first-year undergraduate level, for example Nardi (1997), Hoyles, Newman and Noss (2001), Cox (2000) and Armstrong and Croft (1999). These studies are largely small-scale and descriptive. Cox described a 'Transition Module' given to mathematics and engineering students and designed to smooth the transition to university; the module appears to include tests of learning competencies in a range of areas, which are then used to gain a measure of 'probable preparedness' and to inform more flexible teaching, learning and assessment strategies. Similarly, Dalton (2001) reported a survey of first-year geography undergraduates about their experiences of field studies at A-level, which revealed a wide range of experience, demonstrating the need for higher education institutions to be aware of students' prior experiences and competences. Smith (2004b) investigated how well students of English felt prepared for the transition from school to higher education, while Booth (1997, 2001) focussed on history. These studies tend to be very small-scale, descriptive and exploratory, though they all identify the importance of bridging the transition from school to higher education and of being explicit about the differences in skills required and in approaches to learning.

3.5 Characteristics of 'non-traditional' students

Within the context of the widening participation agenda, a number of studies have sought to compare different groups of students. For example, Murphy (2000) compared the characteristics of mature students accepted onto undergraduate courses at one university with those of applicants who were not accepted, and found few differences except that those accepted tended to have a higher level of educational attainment. Baxter and Hatt (2000) compared the characteristics of first-year undergraduates who came through clearing with those of early-choice students, and found few differences. Harrison (2006) carried out a telephone survey of undergraduates who withdrew in their first year of study, and attempted to develop a model of 'persistence' that would help in understanding how to enable students to become attached and not withdraw. Blicharski (1999) compared a group of 'traditional' with a group of 'non-traditional' students, the latter having attended a ten-week Access Summer School to beneficial effect.

3.6 Summary and implications

The studies reviewed here are all small-scale and mainly exploratory and descriptive. What is needed are larger-scale studies with more systematic evaluation of both immediate and longer-term outcomes. It is evident from the papers reviewed that considerable concern about student withdrawal, which has grown with the greater diversity of the undergraduate population, has led to a number of efforts to develop innovative induction programmes and to consider more carefully the implications of the transition from school or further education to higher education.


4. Teaching, curriculum and learning environments

The student learning experience has also been researched through studies examining approaches to teaching, curriculum and resource development, and the construction of learning environments. There is considerable overlap between these areas in the research, so they will be addressed in a single section. A total of 72 studies were reviewed in these areas and an indicative selection is reported here.

4.1 Overview of research literature in this section

There is a significant body of US and European literature, including Donald Bligh's influential books on lectures and discussion (Bligh 1971, 2000) and the reviews of European and US research by Dochy and colleagues (Dochy et al. 1999, 2003). By contrast, most of the UK literature examined for this section reports relatively small-scale unfunded studies of innovations in one module, often conducted and written up by the module leader. These studies frequently reflect the impact of the policy imperatives discussed above. For instance, the employability agenda has been an important driver of innovations aimed at developing the autonomous learner with transferable skills (including taking responsibility for learning, working with others, and communication skills) throughout the period (Healey 1992). Widening participation agendas, and the perceived need to develop a learning experience appropriate for a more diverse student population, have also driven reform. Group work is a good example of a learning arrangement that is sometimes associated with these imperatives, often in a rather loose way. For instance, Healey et al. (1996) saw group work as a means of both developing transferable skills and coping with lower staff-to-student ratios. More recently, affective aspects of group work have been linked with widening participation and invoked as an aid to the retention of students from non-traditional backgrounds (Cartney and Rouse 2006).

The piecemeal nature of these studies has meant that there is little discussion of how an innovation in one module connects with or influences students' learning experiences in other parts of their degree programme. This is an important shortcoming: as Gibbs (1992) suggested, innovations may be 'locally successful' but have a limited general impact on students' learning if the remainder of the course is unchanged or makes quite different demands.
The local focus of these studies, together with their small scale, has also meant that they do not claim external validity, or often even wider resonance, for their findings. Many of these studies, including some that label themselves action research, can therefore be categorised as reflections on practice. These are not conducive to systematic synthesis, although there were common themes and findings.

Most of the developments discussed fit within a broadly constructivist framework, though this is sometimes not discussed explicitly. The underlying assumption is that students should take an active role in constructing the knowledge base to be covered. This research thus tends to favour approaches to teaching and to developing curricula and resources that encourage active and experiential learning (often contrasted with traditional didactic approaches in which students are passive recipients of knowledge transmitted by the teacher). It also favours the development of 'democratic' learning environments in which student experience can be built upon. Some studies seem to assume that such approaches automatically benefit student learning (see also Cullen et al. 2002 for similar findings). Others are more nuanced, noting resistance on the part of colleagues in the higher education institution, difficulties in relating innovations to other, more traditional approaches and modules, and problems with students' preparation and expectations. The focus on changes in individual cognitions evident through so much of the literature is also dominant in these studies. Socio-cultural analyses that examine how environments might be structured to support learning are evident in some work, however, particularly in the area of e-learning (Crook and Barrowcliff 2001; Ellis and Calvo 2006; Hall 2003; Hampel 2006).

As the analytical map indicates, the vast majority of studies examined in this section are descriptive studies or evaluations, with three reviews and one experimental study (one other study described itself as an experiment, but the reviewers felt it could not be categorised as such). Many studies relied on self-report evidence from students, either from standard course evaluations or from specially designed instruments. A number incorporated both self-report evidence and evidence from tests.
These indicate that self-reported affective and motivational benefits might not translate into gains in knowledge or understanding as measured in tests, and therefore suggest the importance of including both self-report and external evidence. Disciplinary coverage was varied, but there was a concentration of studies in medicine and healthcare, more studies related to natural and applied scientific disciplines than to social science, and a smaller number still addressing arts and humanities subjects. A paper arising from the first large ESRC-funded project in this area, the TLRP project Enhancing Teaching-Learning Environments in Undergraduate Courses, was the only one reviewed that examined relationships between subject knowledge and appropriate higher education pedagogies. This study drew on the concept of ways of thinking and practising in the subject (McCune and Hounsell 2004). Three studies were cross-disciplinary (Gibbs 1992; Sander et al. 2000; Sharpe and Benfield 2005), and two related to learning outside subject disciplines (study support and personal development planning) (Harker and Koutsantoni 2005; Monks et al. 2006).

Research related to e-learning is prominent in this literature (22 studies were identified) and most of the studies that acknowledge any form of funding were in this area. Two main themes were identified within this area of the literature:

1. Strategies to promote learning;
2. Constructing learning environments.


The remainder of this section will be structured around these two areas.

4.2 Strategies to promote learning

A number of teaching strategies, generally drawing on constructivist conceptualisations of learning, are examined in the literature, and an indicative selection is discussed here. While these studies often, but not exclusively, draw on single-cohort data, they are among the strongest studies on teaching and curriculum development that were reviewed. It is important to add a caveat at this stage: studies have been grouped under common headings such as group work, project work and peer learning, but the reviewers are aware that these approaches are defined differently in the studies reviewed.

4.2.1 Group work

Evaluations of co-operative group work have focused more on the affective than the cognitive outcomes and have, therefore, drawn more on student self-reports than on measures of cognitive gain. These studies suggest that benefits to students include: emotional engagement and a sense of belonging (Cartney and Rouse 2006); engaging actively rather than passively in learning and thereby promoting student autonomy and responsibility for learning (Garvin and Butcher 1995; Bourner et al. 2001; Clarke and Lane 2005); and enhancing student motivation. Interestingly, very few studies draw on the work in schools-based education on how groups can be structured and managed to enhance learning. Instead, the supporting literature tends to be drawn either from therapeutic literature or from module evaluations within specific subject areas.

Sweet (2006) is useful as an example of group work in a clinical setting. He examined the use of a group reflection method recommended by Cowan (1998) in undergraduate dental education. A total of 114 undergraduate dental students, in years three and four of a five-year course, were placed in groups of 12 to 16 and shared reflections on what impacted on their learning during their clinical practice. Students reflected on the new clinical skills they had developed, and commented on their experiences of working with their peers in groups. These opportunities for group dialogue led to the identification of common experiences that challenged long-standing institutional practices and assumptions: in this case, students identified the need for more support staff during their clinical practice.

Clarke and Lane (2005) evaluated group work in a different setting and with a much smaller sample, examining tutorials designed to promote critical thinking skills in a core second-year module for education studies students. Fifteen students volunteered to take part in these tutorials and subsequent focus-group discussions.
The researchers found that Level 2 results were better for students who took part in the seminar but made no causal claim about the seminar leading to better grades. In the focus-group discussions, students identified benefits of small-group working, including
taking responsibility for learning and using discussion as a means of clarifying ideas and improving understanding. On the other hand, they pointed to the need for tutor-led information, particularly early in a module.

Two related studies report on group projects. Garvin and Butcher (1995) evaluated a two-week group-work project for first-year bioscience students at Queen's University, Belfast over two years. According to questionnaire responses, students felt motivated and challenged, achieved high levels of understanding of the areas covered, and learned important teamwork skills. Students also reported a sense of ownership of the work and, in a project with elements of peer and self-assessment, not wanting to let the team down. Bourner et al. (2001) evaluated a group project developed for first-year accounting and finance students at Brighton University, developing the questionnaire used by Garvin and Butcher. Teams of students spent two terms working in external organisations away from the higher education institution. Again, students reported positive outcomes of motivation and learning important skills of teamwork and time management, but also reported frustration with unmotivated people in the group who did not pull their weight. (Unequal contributions from participants are also identified as a problem in studies of group assessment; see Knight 2004; McDowell and Mowl 1995.) Bourner et al. (2001) noted that the length of the project (two terms) might have made it more difficult to sustain momentum and motivation than was the case for the two-week project evaluated by Garvin and Butcher.

4.2.2 Learning from and with peers

Peer tutoring has moved from being seen as a way of coping with more students with less resource to being recognised as an opportunity for reciprocal learning among pairs of students. It is, for example, now common within medical education, as exemplified in the literature by Nestel and Kidd's (2005) evaluation of workshops for third-year students on patient-centred interviewing. Twenty-one third-year students were involved as tutors, with their third-year peers in the role of tutee. The evaluation strategy involved students' self-rating of their skills, comments and ratings from simulated patients, and an analysis of pre- and post-test scores. Nestel and Kidd found that student tutors reported that they gained facilitation skills from the project, and an awareness of communication issues. However, their test scores did not indicate an improvement in interviewing skills compared with the control group who had not participated in the project.

There were also clear expectations of reciprocal learning in Topping and Watson's (1996) experimental study of same-year, dyadic, fixed-role tutoring. Objective and subjective measures revealed both cognitive and affective benefits for students. Pairs for problem-based peer tutoring were allocated on the basis of a test of mathematical competence, ensuring approximately the same score differential within each pair. Topping and Watson found stronger indications of knowledge gain than did Nestel and Kidd: for the 45 students involved in the experiment, degree examination results in calculus were significantly better for the experimental group than for the previous year
group. A student feedback questionnaire elicited relatively low reporting of cognitive effects but higher levels of reporting of social effects (integration in course group, improved relationships with peer group) and transferable skills, particularly collaborative, oral communication, and problem-solving skills.

4.2.3 Inquiry-based learning and project work

There is a considerable literature on problem-based learning, particularly in medical and health education (Davis and Harden 1999; Newman 2003). There is also a growing literature on problem-based learning related to other fields: see Gibbs' (1992) discussion of problem-based learning innovations in oceanography and automotive engineering design courses, Savin-Baden's work on implementing problem-based learning curricula (Savin-Baden 2000; Savin-Baden and Wilkie 2004), and Dochy et al.'s (2003) meta-analysis of European and US studies across a range of subjects. Student satisfaction is prominent among the benefits identified in studies included in Newman's (2003) pilot systematic review of problem-based learning, but knowledge gains are less clear (see Ibrahim et al. 2006 and Dochy et al. 2003 for similar findings). Problem-based learning is also seen as resource intensive.

Roberts et al. (2005) examined the student experience of a larger-scale version of problem-based learning offered through a curriculum 'spine of problem, case and patient-based integrated learning activities', which was less resource heavy. The integrated learning activities were supported by a web-based curriculum management system. Students who experienced the larger-scale version were compared with those randomly assigned to smaller problem-based learning groups. There were no significant differences in learning outcomes between the two experiences, but students did prefer the smaller groups.

The review team also looked more broadly at inquiry-based and project learning. Unsurprisingly, some subjects lend themselves more readily to complex project work. Healey's (2005) review of inquiry-based learning for geography undergraduates is typical of a view that 'learning by doing' is an effective way for students to develop the independent learning and research skills needed in the subject, and promotes 'deep' learning.
Another study in geography, this time of a community-based project (Shah and Treby 2006), echoes these findings and stresses the employability benefits for students. Orsmond et al. (2004), however, offered a more critical evaluation of project work (practical laboratory or field work) for final-year students on a biological sciences course at Staffordshire University, perhaps suggesting that the purposes of project work need to be made explicit to both students and staff. A questionnaire was sent to 39 students and 13 tutors who supervised projects. They found that staff and students had different perceptions of the skills developed in the projects. For example, students regarded presentation as a skill developed and assessed through project work but this
skill was not mentioned by tutors. Orsmond et al. also found a lack of association between those skills valued by students for their personal development as biologists and those they identified as important in the project assessment.

4.2.4 Learning in placements outside the university

Other research discusses learning activities outside the university. Smith et al. (2004) and Gibbs (1992) addressed work-based experience. Smith et al. (2004) evaluated the learning experiences of sociology and social policy students who took optional work-based experience modules at Level 2 and Level 3 in 2002–03, using a case-study approach of detailed qualitative interviews with nine students and data from students' reflective assignments. Students arranged their work experience, identified their own research focus, decided how to structure their work-based experience report (in which they were required to reflect upon their experience, and apply relevant theories and concepts), and managed their time and learning. While students found the module engaging and felt a sense of ownership of their project, they struggled with learning new skills, the lack of structure in these modules, and the differences that researching their workplace experience posed compared with their previous experiences of research and studying. Students also reported the importance of tutor support in coping intellectually and emotionally. Opportunities for peer support, however, were not taken up in this situation where the students were dispersed. The authors concluded that, with appropriate support, many students respond well to uncertainty and challenge, but that this mode of learning might not be suitable for all.

One of the innovations reported by Gibbs (1992) was a work placement of 24 weeks at the end of the second year of the BA hospitality management course at Napier Polytechnic. Students were required to prepare a learning plan, and then negotiate a work-based learning contract with their assessor and placement employer that they worked towards during their placement. Students reported that they gained confidence, independence, and negotiation skills from this process.
However, they felt they did not use these abilities once back in the college setting, with its prescribed curriculum and heavy coursework and examination demands (a finding corroborated by measures of the students' quality of learning using the Approaches to Study Inventory).

Gifford et al. (2005) offered a useful discussion of a final-year sociology module involving placements at local schools. This module aimed to provide students with an experience of local community involvement, and (echoing the modules described by Smith et al. 2004) to provide insights into the applicability of social scientific knowledge outside the seminar context. The evaluation of the experiences of the ten students who took this module in 2003–04 revealed that, although students were initially nervous about going into schools, all commented afterwards on how much they had enjoyed the module, which required creativity and emotional commitment not encouraged elsewhere in the undergraduate curriculum. The authors concluded that students were able to experiment with concepts and ideas about the role of citizenship
education in a democratic society in an experiential manner, having previously considered them only in the abstract.

4.2.5 Comparing different approaches to teaching and learning

A group of studies attempted to compare the effectiveness of different approaches to learning and their impact on the student experience. Ibrahim et al. (2006) is an exemplar of the rigorous evaluation methods common in the medical education literature but less so in other curriculum areas. These authors compared the learning outcomes of 129 fourth-year medical students randomly assigned to structured or student-centred teaching for their two-week child health outpatient module. Pre- and post-course tests revealed that students with similar baseline knowledge showed significantly greater knowledge gain with the structured approach than with the student-centred, problem-based, approach. Nevertheless, as the authors suggested, variables that could underlie the differences in knowledge gain (including actual clinics attended, time in clinic and student-to-doctor ratio) needed to be explored further. Also, there was no attempt in this study to elicit student views (which in other research had revealed affective and motivational benefits from problem-based curricula).

Harker and Koutsantoni (2005) compared blended-learning and distance-learning modes for a nine-week, non-credit-bearing, English for Academic Purposes programme for students from ethnic minority backgrounds at the University of Luton. Twenty-three students were assigned to the blended-learning mode and 20 to distance learning. While achievement and satisfaction levels for the course were similar for both modes, retention differed, with 50% of students from the distance-learning group not completing the module, compared with 3% from the blended-learning group. The authors suggested that interactivity and communication between staff and students was key to retention, and was easier to achieve in the blended than in the distance mode.
Sharpe and Benfield’s (2005) review of research on blended learning similarly identified opportunities for communication and interaction as one of the key benefits of this approach.

4.3 Constructing learning environments

Studies in this area centre on ICT and web-mediated learning and fall into two broad categories: those which focus on developing the technology to support student learning, and those that see these new resources as an opportunity to rethink how students' learning experiences can be enhanced.

4.3.1 Technology-focused studies

These studies have necessarily reflected the increased complexity of the resources available. Studies discuss how resources can be tailored to create flexible and broad-based support for students. Typical is Brooksbank et al. (1998) on the introduction of
the WinEcon Computer Assisted Learning (CAL) package into a first-year microeconomics module to help deal with a rapid increase in student numbers. The data comprised 200 questionnaires containing reactions to WinEcon from students who followed the course. The authors concluded that the CAL package needed customising to the course, that there were no gains to be made in staff time, but that the demands of the package may help students to become more active seekers of knowledge. However, the evaluation focused on the WinEcon package rather than on students, so it is difficult to gauge how it contributed to students' development.

A more recent study in this category, Boyle et al. (2005), also arose from a problem of student numbers: this time, the high proportion of withdrawals from computer-programming courses in the first year of study. The solution was the development of a Virtual Learning Environment (VLE) that aimed at engaging students and making the subject matter easier to understand. Difficulties with earlier cohorts were analysed, and learning 'objects' designed to remedy them were incorporated into the VLE. The article described the development of the VLE and student assessments of the usefulness of the site, and mentioned improved pass rates over two years.

Crook and Barrowcliff (2001) investigated the use that students made of computing facilities and usefully, in contrast with the other studies reviewed, linked academic study with other uses of computing technology. Crook and Barrowcliff investigated the use of networked computers in students' bedrooms for a random sample of 34 campus-resident second- and third-year students at a UK university. This was done through system monitoring logs and self-report interviews. The analysis revealed a "mobile and multitasking style of engagement" (Crook and Barrowcliff 2001, p. 245) whereby students combined academic study with more playful uses.
Crook and Barrowcliff concluded that computers can empower study and research, but also recreational activities, and that the environment of the desktop challenges the user’s capacity to focus on and prioritise particular academic tasks.

4.3.2 Enhancing student learning

Studies on enhancing student learning take a longer-term view of the role of technology in higher education. As Laurillard explained, innovations "have to ride each new wave of technological innovation … and drive it towards the quality agenda... so that it achieves its promise of an improved system of higher education" (Laurillard 2005). Laurillard's work on the narrative structure of multimedia programmes (Laurillard 1998) illustrates this approach. Drawing on Gordon Pask, like others who investigate how ICT can be used to develop and improve student learning, she broke down the teaching-learning process into a sequence to examine how different media might support different stages in that process. The study consisted of an evaluation of a teaching programme based on this analysis, which aimed to create an environment in which learners explore, with support, what it means to be a scholar in a specific academic field, in this case a course on Homer, poetry and society. Results were presented as qualitative responses from students. The most valuable contribution of
the paper is the extent to which it illustrated the pedagogic aspects of programme design.

Several other studies look to notions of communities of practice to underpin the development of interactive learning environments. Hall (2003) stands out as a rare example of a study that looks at more than one institution. Hall reported on a HEFCE-funded evaluation of on-line learning in history, examining the impact of the web on the nature of learning communities, and the extent to which the web can promote learners' engagement as participants in learning communities. The rationale for the HEFCE initiative was pedagogic: it was interpreted as enabling students to become more interactive learners and enabling tutors to better focus the contact time they had with students. The evaluation gathered information from 14 history departments as well as from departments of design, English, health and religious studies. This involved over 1,500 students and 75 academic or support staff. The evaluators found that setting ground rules at the outset of courses was crucial. They also found that expectations need to be connected with wider institutional values, and that students can become partners in shaping those expectations.

Other, more focused, studies reflect Laurillard's concern with deploying educational technology for specific purposes in a course of study. Ellis and Calvo (2006) investigated the quality of student learning experiences in face-to-face and on-line discussions in the third year of an engineering degree. They administered three questionnaires (conceptions of learning through discussions, approaches to learning through discussions, and perceptions of learning through discussions) to 115 students.
Cluster analyses revealed two groups of students: one that experienced discussions as a way of understanding the topic being studied (aligned with deep approaches to learning), and another that experienced discussions as a way of aiding reproduction (aligned with surface approaches to learning).

4.4 Summary and implications

The research base reviewed here is of mixed quality, and there are clear limitations. Obvious absences are large-scale studies that connect across courses, programmes or institutions, and experimental and longitudinal designs. It is difficult to get a picture of how the innovations discussed here impact on the student learning experience as a whole, or how that experience might change over time. The research base is compartmentalised: it would be difficult to integrate what these studies have to say about how approaches to teaching, curriculum development, and learning environments impact on the student learning experience in a systematic manner. There are few attempts in the UK research literature to look across studies to build a broad-based understanding of higher education pedagogy and how it is experienced by students.


Nevertheless, some common findings emerge. First, collaborative modes of learning and opportunities for discussion appear to enhance the learning experience for at least some students, promoting motivation and autonomy and enabling them to acquire transferable skills. Second, both tutors and students need guidance and support in introducing pedagogical innovations that might differ from previous experience and challenge prior expectations. These innovations might, therefore, prove resource-intensive. Third, these studies indicate the difficulty of altering individual courses while institutional environments, norms and power relations stay the same.


5. Student perceptions of learning

5.1 Overview of research literature on student perceptions of learning

This category of non-inventory-based research on 'student perceptions of learning' includes more than 60 studies covering a wide variety of topics and themes. The term 'perceptions' is used in an inclusive, rather than a phenomenographically precise, sense. All the papers within this category are descriptive, predominantly using survey and interview methods. On the whole, the papers also tend to use terms like 'perception' and 'experience' in fairly loose ways. The weaker ones seek to classify student learning experiences on mono-dimensional scales of quality, satisfaction or depth of learning, in a way that parallels the inventory-based research. Many are based primarily on samples from a single institution or a single course. These considerations limit the quality of this research and its utility. A few studies recognise these limitations and seek to complement survey findings with qualitative case studies (McCune 2004; Sutherland 2003b). However, these case studies tend to be used to complement rather than develop or challenge dominant research paradigms, such as the widely accepted distinction between 'deep' and 'surface' learning.

For the sake of analytical ease, the category as a whole is best understood as consisting of three broad sub-groups of research, of which the first is the largest, accounting for almost 30 items. The groups are: 1. Perceptions of learning by particular groups of students; 2. Perceptions of learning by subject and teaching method; 3. Perceptions of learning by developmental stage. Even though a few of the strongest papers (e.g. Yorke 2000) are not easily reducible to one of these groups, and may even overflow the category itself, the groupings still offer a useful map of general trends and emergent themes in the literature. In the following, examples of studies within each of these three groups are discussed in some detail.
Rather than attempting to provide a comprehensive but superficial overview of all studies in this area, these discussions exemplify the main ways in which research in this area investigates the student learning experience.

5.2 Perceptions of learning by particular groups of students

All students have their own histories that shape their experiences and approaches to learning (Houston and Lebeau 2006). Nearly half of the papers (the single largest group within the category of 'student perceptions of learning') explore the learning
experiences of particular categories of students (or ex-students) who face particular challenges, who have historically been disadvantaged, or who perceive themselves as discriminated against. This includes 'mature' students (Castles 2004; Crozier and Garbet-Jones 1996; Murphy and Fleming 1998; Haggis 2003), female students (Lawrence, Ashford and Dent 2006; Leathwood 2006; Smith 2004a), non-traditional, low-income and working-class students (Bamber and Tett 2000; O'Connor 2003; Cooke et al. 2004), students living at home (Slack and Casey 2002), and students with a disability (Hall and Tinklin 1998; Borland and James 1999; Fuller et al. 2004; Orsini-Jones et al. 2005).

A significant proportion of the papers focus on the learning experiences of so-called 'mature' students. Bamber and Tett (2000) offered a useful two-way model of the process of change: change by students themselves, negotiating their own 'transformations' in expectations towards what may initially be an 'alien' learning environment, and change by institutions in addressing the specific 'needs' of these students. Castles (2004) provided a useful review of the literature on the factors affecting 'persistence' amongst adult learners. Haggis (2003b) provided an influential challenge to the dominant psychological paradigm within this field.

Research into the gendered construction of learning explores a variety of approaches, and throws up fascinating disjunctions that make it hard to draw broader conclusions. In one case, women's higher grades were attributed to a macho sporting culture (Smith 2004a). Another article stressed the gendered nature of the coping strategies that resulted (Lawrence et al. 2006), and a third focused on the gendered nature of discourses about the 'independent learner' (Leathwood 2006).
The four papers on the learning experience of students with disabilities are important and valuable, even if they focus less on rethinking the actual learning process than on the factors that affect students' ability to learn. They have important policy implications for higher education institutions seeking to meet recent disability legislation in providing support for learning, but also for the attitudes of teachers themselves. Fuller et al. (2004) pointed to the different levels of support received by students within a single higher education institution, along with varying willingness to seek support. They also noted the importance of access to information, and the need to prioritise quality as much as equity.

Most of these papers are empirical institutional case studies based on surveys carried out in one institution. With some exceptions (such as the very practical and urgent findings from the research carried out on students with a disability), this tends to limit their applicability and relevance for others. O'Connor (2003) challenged this reliance on a single institution by using both qualitative and quantitative methodologies to explore the conditions shaping learning in two different new universities. The triangulation of methods allowed him to address, if only in passing, the different institutional conditions mediating university learning (see also Houston and Lebeau 2006).


Included in this section are a related group of papers that address some aspect of student finance and its impact on the student experience: a relatively new area, and a topic of growing importance. This includes studies exploring the learning experiences of those pursuing extensive term-time employment (Curtis and Shani 2002) and those receiving bursaries (Hatt et al. 2005), and papers relating retention and non-completion (often for financial reasons) to student learning experiences (Thomas 2002b; Yorke 2000). The findings are often interestingly counter-intuitive. While Curtis and Shani point to the perceptions of students (many of whom are effectively working as much as they are studying) that they are disadvantaged in course work, they argue that the experience is beneficial for those pursuing a business studies degree. On a different theme, Hatt et al. (2005) demonstrate that students with bursaries at two West Country universities had more successful first-year experiences. Their interviews with students suggested that bursary holders were more focused on educational goals than their classmates, and linked their determination to succeed to their backgrounds.

The relationship between non-completion and student learning has also been examined in a more systematic and high-level way by Yorke (2000). His paper, based on a sample of more than 2,000 non-completers, argues that dissatisfaction with the quality of the student experience (usually because of the wrong choice of subject) was an important factor in decisions not to continue studies. At a more theoretical level, Thomas makes the case that institutions have to develop an inclusive institutional habitus as a way of helping students to accumulate the 'social capital' they need to make a success of their studies (Thomas 2002a, 2002b). A similarly theoretical contribution is made by Mann, who draws on Lacan to reflect on student alienation and engagement (Mann 2001).

5.3 Perceptions of learning by subject and teaching method

A second, and smaller, group of papers focuses on student learning in specific subjects. This includes literature on students in medical school (Miller 1993), law (Baderin 2005), geography (Dalton 2001; Bradbeer et al. 2004), education (Davies and Hogarth 2002, 2004), mathematics (Cullingford and Crowther 2005; Drake 2005), sports studies (Lane et al. 2004), fashion design (Drew et al. 2000), and biology (McCune and Hounsell 2005; Turner 2004). All these papers point to some of the specific issues faced by those teaching these disciplines. A few offer valuable discipline-specific perspectives on student learning, and have relevance outside the immediate disciplinary community being addressed. An excellent example is the work of McCune and Hounsell (2005), which offers a model of how to link together a variety of forms of evidence to explore the evolution of student learning over the period of a degree course.


Others explore student perceptions and experiences of particular methods, such as Laurillard's work on problem solving (1997), or students' experiences within particular institutions, such as Ashwin's work (2003, 2005) on the Oxford tutorial. A further group, of mixed value and utility, explores student experiences of e-learning. Subject-specific examples include studies of the attitudes of business studies students (Selwyn et al. 2000) and of the health professions (Stokes et al. 2004) towards the internet.

5.4 Perceptions of learning by developmental stage

A final group of descriptive papers explores the importance of time and the temporal staging of the student learning experience. Acknowledging the importance of the transition into university, several papers focus particularly on first-year learning experiences (McCune 2004; Sutherland 2003b; Sander et al. 2000). This work overlaps with that discussed in section 3 on induction and transition. Sander et al. (2000) stands out because its findings on student expectations of teaching are based on a large sample of students on three different courses at three different universities. At the other end of the honours degree course, one paper explores the role of final-year dissertations in shaping student learning experiences (Todd et al. 2004).

Another group of studies explores the way in which students do (or do not) perceive their skill development over the course of a degree (Lucas et al. 2004; Burke et al. 2005; Drew 1998; Drew et al. 2000). Drew (1998) recognises the limits of categorising students on a narrow developmental hierarchy, and seeks to complement this with case studies. Another study points to the value of a concept like learning journeys (Hughes et al. 2006). Finally, an unusual paper focuses on student emotions and their relationship to learning (Beard 2005), offering a potentially different way of exploring student learning.

5.5 Summary and implications

What are the most important findings to emerge from this literature? Some will find the subject-specific and topic-specific findings particularly useful for informing policy and practice – for example, on supporting students with disabilities. Some of the emerging findings from research into the relationship between paid work, debt and student learning are also important. With the cap on top-up fees only in place until 2009, the challenge of financial pressures on an increasingly diverse student community means that research into this aspect of the student learning experience will be a priority. There are also important conclusions to draw about the research process itself. The strongest papers are those which triangulate different quantitative and qualitative methodologies, bring together research spanning more than one institution, or bear extrapolation beyond a single case study.


6. Assessment and feedback

There is a consensus in the literature that assessment and feedback are central to student learning and sense of identity, and that the curriculum for students can often be defined by the messages sent by the assessment regime. It is unsurprising therefore to find a number of studies that focus on assessment and how it impacts on the student learning experience.

6.1 Overview of research literature in this section

The literature examined in this area is made up of relatively small-scale studies, often related to one course in one institution and conducted by module tutors or leaders. These contrast with the larger-scale reviews of international literature (Dochy et al. 1999; Gibbs and Simpson 2003, 2004). Often, these UK studies are the results of practitioner research related to isolated innovations in one module that do not connect to the wider student experience. However, there are also studies that aim to ask students directly how they perceive and act on feedback and assessment. In these cases, while the focus is on one module or course, this is connected explicitly to students’ experiences of assessment and feedback more generally. An indicative selection of 27 articles is reviewed under this heading. Descriptive and evaluative designs were dominant, with one piece of action research, two inventory studies, and one review. Most of the research examined involved student self-report in some form, often alongside the views of academic staff. Some studies also included assignment marks and module grades. Other studies, which aimed to examine student perceptions of assessment and feedback, use qualitative methods and focus exclusively on student views. Several studies adopt a constructivist framework, most obviously in the emphasis on group- and self-assessment methods, approaches also linked to increased student numbers and a broader employability agenda. Traditional assessment methods such as the unseen examination do not receive the same attention. Nonetheless, the research under this section appears sensitive to ambiguities in student attitudes to assessment. It also points to potential tensions between an institutional focus on ‘objective’ summative measures and the judgement of academic staff, and the aim of empowering students through peer- and self-assessment.
Most papers adopt the standard distinctions between formative assessment (sometimes labelled feedback), which is intended to assist student learning, and summative assessment, which is intended to check the level of learning at the end of a module or programme. However, the boundaries between the two are blurred. For example, James (1997) reports that students use feedback as a benchmark measure,

The student learning experience in higher education 64

The Higher Education Academy 2008

while traditionally summative modes of assessment can be used in a formative manner to develop future learning (Fellenz 2004). The literature in this section falls into two main categories, related to the central focus of the research:

1. Studies that examine different forms of assessment and their impact on student learning.

2. Studies that examine student experiences and perceptions of feedback and assessment as the main focus of their research.

6.2 Forms of assessment

6.2.1 Multiple choice

Multiple choice is often associated with summative assessment of low cognitive level learning with little opportunity for formative feedback, and surface approaches on the part of the student (Gibbs et al. 1997). Fellenz (2004), however, reported on an assignment in his module designed to use multiple choice for formative purposes. Students were required to work individually and in small groups to develop multiple choice items, to justify the correct and incorrect answers, and to determine what cognitive levels the item was testing. High quality items were included in the end-of-course examination. Students identified benefits of enhanced understanding of course content, motivation, and ownership of the assessment process, but perceived the exercise as challenging and requiring high levels of tutor support. Fellenz concluded that, with appropriate support, students were able to engage with the course subject matter at high cognitive levels.

6.2.2 Continuous assessment in tutorials

Two studies examined here address continuous assessment in tutorials and its impact on student learning. MacMillan and McLean (2005) reported on the results of restructuring a first-year module, and particularly the module assessment. Assessment in the module was re-centred around the three tutorial sessions, with students required to provide pre-tutorial briefing papers and post-tutorial evaluation reports. Trotter (2006) discussed continuous summative assessment in the form of tutorial files that contributed to students’ final module grade. In both cases, students reported that they valued frequent feedback that enabled them to focus on improving their work, and that the stimulus of assessment led them to work hard on the module and prepare carefully for tutorials. The students interviewed by MacMillan and McLean also noted that they studied in more detail and depth than under the standard examination/essay format. Neither study associated the innovation with a clear improvement in learning outcomes, although tutors interviewed by MacMillan and McLean noted improvements in written work, particularly in argumentation skills. Limitations noted in both studies included the heavy tutor workload associated with providing timely feedback on


regular assignments. Trotter also found that students responded in different ways to the demands of submitting work regularly, with many enjoying the challenge but others finding it enervating.

6.2.3 Peer and self assessment

A number of studies examine peer assessment. Studies that examine student perceptions of the exercise (Smyth 2004; Langan et al. 2005; MacDonald 2004) reveal common findings. Students initially lack confidence in their ability to judge or critique the work of fellow students but, with appropriate guidance and scaffolding to help students understand assessment criteria, active learning can be enhanced and students can develop the ability to evaluate and think critically. These findings reflect closely the benefits of self and peer assessment reported in Dochy et al.’s (1999) review of international literature. Langan et al. (2005), however, raised doubts about the ‘validity’ of peer assessment. They found that the grades awarded by student assessors were on average 5% higher than those awarded by tutors, but that the difference was smaller for those students who were involved in developing the assessment criteria. Similar issues arise in relation to student attitudes to self-assessment: uneasiness at having to evaluate their own work, insecurity, and a feeling of difficulty even where they had previous experience of self-assessment (Somervell and Allan 1995; Fitzpatrick 2006). Fitzpatrick’s (2006) discussion of self-assessment in the context of a third-year community nursing module is particularly useful. Self-assessment was deemed appropriate for this communications module, where critical examination of skills related to students’ practice was required. Fitzpatrick noted initial resistance not only from students but also from faculty colleagues, and a tension with norms of tutor assessment and concerns about plagiarism in the higher education institution as a whole. Self-assessment, she argued, is risky for the students and staff involved, and providing appropriate support materials and guidance requires time and effort from tutors.
Nevertheless, students felt they developed critical thinking skills, assertiveness, and a greater sense of responsibility for their own learning and practice (benefits also noted by tutors and clinical practice staff).

6.3 Student experiences and perceptions of feedback and assessment

Other studies examine how students approach and perceive assessment and feedback, and how it relates to their learning more generally. Sample sizes are small and findings cannot be generalised systematically but common themes can be identified.


First, assessment can dictate how students approach their learning. A common finding in this research is that students focus on what is assessed (hence the attempts, described by MacMillan and McLean (2005) and Fellenz (2004), to change the assessment regime in order to change what and how students learn). Thomson and Falchikov (1998) conducted a revealing study with first-year students on three different degree courses at Napier University: social and management sciences, engineering, and publishing. Qualitative interviews revealed a tendency to focus only on what is examined, and to ‘rush’ their learning. Comparing interview comments with Approaches to Study Inventory scores pointed to a potential conflict between student belief and action: while students seemed aware that if they managed their time effectively they could deal with assessment in a way that helped their learning, they often ended up rushing and therefore approached their assessment superficially. Thomson and Falchikov also found that students held complex and contradictory attitudes to assessment. On one hand, examinations were perceived as a hurdle to be overcome, of no value other than enabling students to complete the course; on the other hand, they were seen as a helpful indicator of what students had achieved and learned. The importance of assessment in students’ learning behaviour could be associated with the central role it seems to have in forming their sense of identity and achievement as learners. This may be a particular issue for some groups of students. For example, for the mature students interviewed by James (1997), performance in assessments was essential to their confidence in their abilities as students, and they placed great importance (some admitted too much importance) on the numerical grade they achieved (see also Higgins 2000).
Similarly, Pitts (2005) noted the importance of assessment for the music students in her study, for whom performance and musical ability are central to their identity as musicians.

A sub-set of this research focuses on feedback, intended for formative purposes, and how students perceive and use it (Hinnett 1998; Higgins 2000; Weaver 2006; Pitts 2005; Orsmond et al. 2005). Some consensus emerges around what students want from feedback: it should be timely, clear, constructive, and positive in tone (including when critical points are made), with detailed and individualised comments. Different levels of feedback are required at different stages in a course: MacDonald (2004), for instance, found that students required a lot of guidance in their written work in the early stages of a course. She noted that networked technology provided opportunities for feedback and online discussion to provide the support required. However, obstacles to meeting students’ expectations have been identified. Pitts’ (2005) case study in a music department indicated issues of departmental procedure (whereby written feedback was frequently not passed on to the student but placed ‘on file’), time constraints on tutors, and tutor perceptions of feedback as a bureaucratic process rather than part of student learning. She described a mild “underlying dissatisfaction” with the current situation on the part of students, and to a lesser extent among staff, but “little impetus for change”. This echoes Crook et al.’s


(2006) argument that well-organised bureaucratic processes around assessment might conceal dissatisfaction among students and inhibit them from expressing unease. Weaver (2006) found that students had limited opportunities to act on feedback in a modular system in which most coursework was summatively assessed and modules were completed before feedback was received.

This body of research also reveals complexities and difficulties around students’ use of feedback. Studies reveal that detailed guidance and advice (Weaver 2006) and opportunities for spoken dialogue (MacDonald 2004) are needed to help students engage more effectively with feedback and to build on it in future work. Such measures are obviously resource-intensive. Other research, however, points to deeper issues that may not be solved by such measures. For instance, Pitts (2005) found that feedback comments were open to a range of, sometimes contradictory, interpretations, and that students were able to articulate what they wanted from feedback clearly to a researcher but not to their tutors. Other authors draw on the ‘academic literacy’ approach of Lea and Street (1998) and Ivanic (1998), among others, to reflect on power dynamics within higher education and other complexities that affect how students approach and use the written feedback they receive. The academic literacy perspective highlights that tutors’ comments are underpinned by tacit subject-specific discourses, and that students, who are not yet initiated into these discourses, may struggle to understand comments for this reason (Higgins 2000; Rust et al. 2005).

6.4 Summary and implications

Some of the weaknesses in the research base described previously apply here: experimental studies are absent, and cross-course and cross-institutional designs are rare. However, the better studies in this section, though still small-scale, are often sensitive to the institutional environment and to the complexities of students’ attitudes and actions. Some of the research on feedback in particular provides a model for examining sensitively students’ perspectives on this important aspect of the student experience, and how these perspectives are linked with institutional procedures, priorities and power relations. Of the main common substantive findings, perhaps the most obvious is that several potential obstacles lie in the way of ensuring that assessment and feedback enhance the student learning experience. The assessment practices described above and the forms of feedback desired by students are resource-intensive. Institutions prioritise ‘objective’ summative measures and the judgement of academic staff, priorities that might be in tension with utilising assessment and feedback to benefit student learning. Students also display complex and contradictory attitudes, motivations and actions. These two factors together could account for the difficulty, noted in a number of studies, in facilitating meaningful change.


7. Student learning experience as a measure of quality in higher education

7.1 Overview of research literature in this section

This section reviews items concerned with the investigation of the student learning experience in the context of quality assurance in higher education. The discourse on quality issues in higher education is international in scope and goes back to the 1970s and 80s. Therefore, this section draws on a wider basis of literature than other findings sections of the report. In order to capture this wider discourse on quality it was necessary to use a different search strategy, outlined below. As the development of, and the findings drawn from, instruments to measure the quality of higher education in relation to the student learning experience are discussed in some detail, this section draws on US and Australian literature that prepared the ground for the application of similar instruments in the UK. Findings from UK-based studies are discussed in the later parts of this section. The papers cover investigations at the national level (National Student Survey), the institutional level, the programme level (including studies in a variety of subjects), and the class level.

7.2 Literature search strategies

In addition to the common search strategies used for selecting research papers in other sections of this study, materials for this section on the student learning experience as a measure of quality were selected from a number of other sources:

1) Websites of the key UK, European and global agencies involved in quality assurance of higher education, including the Higher Education Funding Council for England; the Quality Assurance Agency for Higher Education (QAA); Universities UK; the Council of Europe; the European University Association; EAIR, The European Higher Education Society; the Centre for Research in Higher Education Policies (CIPES); and the International Network for Quality Assurance Agencies in Higher Education (INQAAHE). Reports, conference papers and policy documents are included.

2) Websites of some institutional student experience and quality assurance centres, including the Centre for Research and Evaluation at Sheffield Hallam University; the Centre for Research into Quality at the University of Central England in Birmingham; the Center for Postsecondary Research and Planning at Indiana University; and the Center for the Study of Evaluation at the University of California Los Angeles.


3) Hand searches in specific peer-reviewed journals, including Assessment and Evaluation in Higher Education; Quality in Higher Education; Quality Assurance in Education; and the Journal of Educational Psychology.

4) Books published by the Society for Research into Higher Education and Open University Press at Buckingham, and by Jossey-Bass at San Francisco. Like the above journals, these two publishers were identified because they have a reputation for publishing high-quality books on student experience and the quality of higher education.

5) The American educational database ERIC. Only materials directly related to student experience and the quality of higher education are included.

7.3 Assessing student learning for quality assurance

7.3.1 Background

Recent literature argues that in assessing quality in higher education it is important to take student learning and development into account. However, the student learning process is difficult to measure. Pring (1992) points out that student learning in university is a slow and incremental process that cannot be defined in absolute terms, and that it is difficult to determine when and how knowledge has developed. To overcome the technical difficulty of assessing the student learning experience from an external perspective, some recently developed attempts to evaluate the quality of higher education are based on evaluation provided directly by students. Student evaluations became a component of quality assurance systems in the US during the 1980s and 1990s. As Murray et al. (1990, p. 250) state, “Student ratings have gained widespread acceptance over the past 20 years as a measure of teaching effectiveness in North American colleges and universities”. There are several rationales for implementing student evaluation. Given that individual students are the primary beneficiaries of the higher education experience, that students invest considerable time and energy while attending higher education, and that they have a significant amount of contact with instructors, the use of student evaluation seems justified and should be given substantial weight. Some research in this area (Cashin 1988; Marsh and Overall 1980) supports the stability, validity, reliability and usefulness of students’ perceptions of learning as a criterion for the evaluation of teaching effectiveness.

7.3.2 Instruments for measuring student experience

Several surveys of current and graduating students have been created and administered at the institutional and national level. These have been used as a way to


evaluate students’ experience and institutional quality, and as an important source for empirical research on the impact of higher education on students (Ewell and Jones 1994). Inventories have been developed with the aim of eliciting students’ evaluations of teaching effectiveness and educational quality. These also have a long history, which is well reviewed by Marsh (1987, in press). Marsh (1987, p. 369) concluded that “student ratings are clearly multidimensional, quite reliable, reasonably valid, relatively uncontaminated by many variables often seen as sources of potential bias, and are seen to be useful by students, faculty, and administrators”. Many of the instruments reviewed by Marsh focussed upon evaluations of individual teachers or course units. However, as Richardson (1991, p. 59) pointed out, “… from the perspective of an institution of higher education seeking to maintain and improve teaching quality, it could be argued that the appropriate focus of assessment is rather an entire degree programme”. The Course Perceptions Questionnaire (Ramsden and Entwistle 1981), subsequently developed into the Course Experience Questionnaire (CEQ) by Ramsden (1991), provides an instrument that can be used as a performance indicator of teaching effectiveness at the level of the whole course or degree. The CEQ is the most common instrument used in the British literature for this purpose. It is based “on a theory of university teaching and learning in which students’ perceptions of curriculum, instruction and assessment are regarded as key determinants of their approaches to learning and the quality of their learning outcomes” (Wilson et al. 1997, p.33).7 Inventories such as the CEQ are often based not on evidence about individual differences but on evidence about those features of courses, as experienced by students, that are related to the approaches students take to their studying.
The instruments provide information about important aspects of teaching of which students have direct experience: quality of teaching, clear goals and standards, workload, assessment, and emphasis on independence (Wilson et al. 1997). There are clear links with Approaches to Study inventories (see section 1 of the findings).

Pace developed the College Student Experiences Questionnaire (CSEQ) (Pace 1980, 1984, 1985, 1987) on the premise that the more students engage in educational activities, the more they benefit in their learning and development. The third version of the questionnaire featured fourteen Quality of Effort scales, eight College Environment items and twenty-three Estimates of Gain items. Quality of Effort is considered a key dimension for understanding student satisfaction, persistence, and the effects of attending college, because it provides an estimate of the contributions students make

Footnote 7: The various instruments designed to measure approaches to studying or to assess teaching effectiveness used in the literature reviewed are largely underpinned by theories of learning derived from cognitive psychology or phenomenography. As such they tend to ignore affect, which is clearly a key issue in the student learning experience. This is a gap in the literature, as are investigations of the self-concept held by learners in higher education.


to their own learning process as well as of the educational resources that the institution has to offer. The Quality of Effort scales cover experience with the library, classroom, faculty, clubs, and theatre. The College Environment items assess students’ perceptions of aspects of the college environment that have been shown to be related to learning. The Estimates of Gain scales ask students to make value-added judgements about efficacious educational practices and outcomes. Perhaps the key finding from the CSEQ was that quality of effort is a sound indicator of quality in the undergraduate experience.

The National Survey of Student Engagement (NSSE), established in 1999 in the United States, is a tool to measure undergraduate students’ engagement in programmes and activities in higher education institutions, and treats the result as an indicator of quality. Each item on the survey corresponds to one of five benchmarks of empirically confirmed ‘good practices’ in undergraduate education: level of academic challenge, active and collaborative learning, student–faculty interaction, enriching educational experiences and supportive campus environment (Kuh 2002). According to Coates (2005, p.33), the NSSE “has played a critical role in identifying good practices which provide substance for composite indicators and in moving the topic of engagement into the realms of public and institutional policy and practice”.

The National Student Survey (NSS) was established in the United Kingdom in 2005, following the recommendations made by a task group on revising the quality assurance framework in 2002. The survey, described by HEFCE as an “essential element of the revised quality assurance framework for higher education”, is intended to inform the public and especially prospective students in choosing higher education institutions (Higher Education Funding Council for England 2007).
Some, however, suggest that student surveys of this nature should not be used to make comparisons between institutions (Prosser 2005).
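Instruments such as the CEQ and CSEQ typically aggregate students’ Likert-scale responses to individual items into per-scale mean scores (e.g. a ‘good teaching’ scale), reverse-scoring negatively worded items. The following is an illustrative sketch only: the function, scale names and response data are hypothetical and are not drawn from any published instrument.

```python
# Illustrative sketch of Likert-scale aggregation as used by inventory
# instruments such as the CEQ. All names and data here are hypothetical.

def scale_scores(responses, scales, reverse_items=(), points=5):
    """Average 1..`points` Likert responses into per-scale mean scores.

    responses: dict mapping item id -> list of student responses
    scales: dict mapping scale name -> list of item ids
    reverse_items: ids of negatively worded items, reverse-scored so that
        a response of 1 counts as `points`, 2 as `points` - 1, and so on
    """
    def adjust(item, value):
        return points + 1 - value if item in reverse_items else value

    scores = {}
    for scale, items in scales.items():
        values = [adjust(i, v) for i in items for v in responses[i]]
        scores[scale] = sum(values) / len(values)
    return scores

# Hypothetical data: three respondents per item on a five-point scale.
responses = {
    "q1": [4, 5, 3],   # e.g. "Staff give helpful feedback"
    "q2": [2, 1, 2],   # e.g. "The workload is too heavy" (negative wording)
    "q3": [5, 4, 4],
}
scales = {"good_teaching": ["q1"], "appropriate_workload": ["q2", "q3"]}
print(scale_scores(responses, scales, reverse_items={"q2"}))
```

Reverse-scoring before averaging is what allows positively and negatively worded items to sit on the same scale; without it, the negatively worded item would pull the scale mean in the wrong direction.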

7.3.4 Discussion of instruments and findings

Harvey (2005) suggests that traditional standards of quality, related primarily to reputation and resources, are mainly relevant to old universities. It seems that using student experience as an indicator can redress this balance. In the results of the 2006 NSS, many traditionally “unfashionable” universities received the highest scores from their students (Smith 2006; Times Higher Education Supplement 2006; University of Buckingham 2006). The vice-chancellor of the top-ranked university claimed, “Our ethos is traditional teaching … I don’t think all universities have this. Some places are obsessed with the research assessment exercise and rankings” (Ward and Jayanetti 2006). Higher education institutions appear to respond to such surveys by shifting attention and resources to address areas of underperformance (for example, extending library hours and improving student consultation services) (Shepherd 2006). Still, it is questionable whether student satisfaction has much impact on widely used indicators such as the newspaper league tables.


The Open University and Sheffield Hallam University have used the Assessment Experience Questionnaire (AEQ)—developed from Gibbs and Simpson’s identification of the conditions under which assessment supports student learning (Gibbs and Simpson 2003, 2004)—as a teaching evaluation tool. These higher education institutions surveyed more than three thousand students over three years. Responses from academic staff indicate that the tool is reasonably valid and can lead to improvement of teaching performance and, therefore, student learning (Hills 2005).

The results of surveys like the NSSE and NSS are also often used to identify benchmarks across higher education institutions. However, Marsh and colleagues (Marsh et al. 2002) warn that the basis for making comparisons across institutions, or across the same disciplines or departments at different institutions, is limited. Marsh et al. studied the Australian Postgraduate Research Experience Questionnaire, and showed that the questionnaire failed to differentiate the performance of institutions. The authors further argued that, in general, quality variation across universities was not significant, and that student evaluation surveys therefore cannot provide a useful basis for benchmarking universities. The authors asserted that the best unit of analysis for student evaluation is the teacher or the class.

To ensure that feedback from students can be collected successfully, the interests of the key stakeholders in this process (staff and students) need to be addressed, and incentives might be required. The literature suggests that students are concerned about the confidentiality of their responses and will be more likely to take part if they feel empowered by the process, whereas staff may become more active if the results can have a positive impact on promotion or salary.
Richardson, referring to the work of Spencer and Schmelkin (2002, cited in Richardson 2005a), also found that students need to be kept informed of how their feedback will be used, or they might dismiss the feedback mechanisms. It has also been argued that student evaluations have been used primarily by institutional administrators to evaluate the performance of teaching staff, resulting in reduced emphasis on improvement and increased resistance from academic staff.

Not all academics are comfortable with using student satisfaction to evaluate institutional or course quality. Some argue that student experience evaluation is unable to penetrate the complexities of higher education: ratings from students on teaching do not measure the quality of the students’ learning experience in a comprehensive way, but instead establish how students evaluate aspects of teaching (Johnson 1998). Others remain sceptical about the reliability and value of student feedback. For example, Richardson (2005a, p.407) claimed that "resistance to the use of student ratings has been expressed based on the ideas that students are not competent to make such judgements or that student ratings are influenced by teachers’ popularity rather than their effectiveness". Some argue that students’ ability to give accurate answers is compromised when answering higher inference questions, as opposed to lower inference, relatively factual questions (Pascarella 2001). However, there is some consensus that, when compared to other quality indicators, students’


evaluation is valid, reliable, stable and useful in assessing the quality of higher education (Ballantyne et al. 2000; Hills 2005; Kwan 1999; Marsh 1984, 1987). Careful attention should be given to the interpretation of answers to questions on “satisfaction”, because satisfaction is a broad and complex concept with numerous dimensions, and a student may be satisfied in more than one way (Wiers-Jenssen et al. 2002). Moreover, these dimensions of satisfaction have the potential to be linked to personal characteristics (Green et al. 1994), so student attributes also need to be taken into account when analysing the data. Student satisfaction questionnaires or ‘exit surveys’ need to be based on a solid theory or conceptualisation of student learning if the improvement of student learning is an aim. Surridge (2006, p.132) argued with reference to the NSS that “’raw’ figures do not take into account the characteristics of students, their courses and the institutions in which they study” and therefore that the data “may produce at best misleading and at worst invalid measures of teaching”.

7.4 Summary and implications

In recent years, higher education institutions and their stakeholders have started to pay greater attention to the student experience in higher education. This attention can be connected to the move to a ‘mass’ or ‘universal’ higher education system, to technical and statistical developments in the use and role of performance indicators, to the rise of the notion of students as customers, and to the increasing importance of accountability for publicly-funded institutions.

It has been argued that the collection of student evaluation data, whether through questionnaire surveys or through other mechanisms, can yield useful information and can help institutions to understand students’ experiences and needs. This requires empirical instruments to be developed on a sound conceptual and theoretical basis and to be implemented systematically, requirements that are often not met. However, higher education institutions can make use of knowledge of the student experience to examine institutional policies and practices, to identify weak areas in their provision, and to assist in improving the quality of student learning (Student Experience Special Interest Group of EAIR 2005). Again, this requires a systematic and theory-driven analysis of information on student experience, and a conceptual framework that ensures that measures aimed at improving the student experience are developed and evaluated rigorously.

This section of the review has outlined studies that assess and compare the quality of programmes or institutions from a student-centred perspective. Some of the research in this area examines the validity and reliability of the instruments themselves. The weaknesses identified mean that student experience data should always be interpreted with caution. The literature generally agrees that quantitatively-oriented investigations of student experience need to be used alongside other tools if they are to contribute to processes of quality assurance and enhancement.


Discussion and implications

The research base

The systematic approach to searching for literature on the student learning experience has yielded a large number of items relevant to this area of research. However, the review of this literature identified a broad, heterogeneous and somewhat scattered research base. The analytical map developed for this report reveals a variety of research approaches applied to investigating a wide range of aspects of students’ learning experience.

Phenomenographic studies and inventory methods are the dominant approaches in the literature reviewed. Some of these are large-scale studies employing relatively sophisticated statistical methodologies. This kind of research has contributed valuable insights into the links between teaching, students’ and teachers’ perceptions of learning, and students’ approaches to studying (typically categorised as ‘deep’ and ‘surface’). The best of these studies have made systematic connections between students’ previous experience, input factors, and the processes and outcomes of learning, and have traced their influence on the student learning experience. Many others, however, tend to replicate and endorse, rather than question or develop in any significant way, the dominant assumptions and approaches to conducting such research.

Many researchers of the student learning experience are practitioners researching their own subject areas. For this reason, small, localised and sometimes impressionistic studies are dominant. These studies may be highly useful for course development purposes, but their dominance is problematic for constructing and evaluating a research base. There is an absence of rigorous evaluation methods, experiments, and longitudinal designs. These smaller studies are characterised by rather vague notions of individualised, constructivist or cognitive approaches to learning. Broad interpretations of the concept of communities of practice are often applied in the growing field of e-learning.
The tendency not to question the basic assumptions of learning in higher education seems to curtail the potential of research to explore new ways of thinking about learning in the diverse contexts of higher education today. However, there are positive signs. For example, well-funded systematic research, such as the current TLRP projects, may provide strong challenges to the dominant traditions identified in this review by developing different conceptualisations of learning and innovative research methods in large-scale, cross-institutional, longer-term studies.

Recommendations for future research

1. As indicated in several sections of this report, the field is in need of more studies that examine the particular discipline-based characteristics of the student learning experience. This would generate a better understanding of the factors shaping


learning in a given subject. Also, although this review has identified some important insights into features of the student learning experience from phenomenographic and other studies, more research is required both to take these insights further and to find different ways of understanding the student learning experience.

2. Institution-specific studies need to be complemented by cross-institutional investigations. Only by doing this can research findings and recommendations be made applicable across the sector.

3. The changing demographic composition of student cohorts requires research into groups such as mature students, part-time students, students with a vocational background, and students who study for a second degree. Research into the student learning experience has to engage with the changing student intake into higher education and with the on-going reform agenda in higher education.

4. Research in this area needs to respond to the changing conditions in which students operate, for instance their need for part-time work. The changing interface between further and higher education also has an impact on the student learning experience.

5. Studies also need to take into account the changing experience of students’ learning before they enter higher education. The introduction of Diplomas for 14 to 19 year-olds from 2008 will alter the pre-higher education student experience.

6. Future research needs to make use of methodological frameworks that capture the mediated and contextualised nature of learning, as well as social and organisational aspects of learning. Also needed are studies that look at the student experience in a holistic manner, linking academic learning with other aspects of student life.

7. These recommendations imply the need for longer-term and collaborative research programmes. These programmes would allow researchers to develop further the existing methodological frameworks, thereby overcoming some of the limitations identified.
In particular, they would allow the interaction of different methodological approaches, which is missing in current research on the student learning experience. Longitudinal and cumulative research is also required for developing, implementing and evaluating ways of improving the student learning experience in an iterative manner.

8. Increased external funding would act as a valuable supplement to the small, short-term internal institutional grants typically used for this form of research. Large-scale, long-term and cumulative research could also contribute to capacity-building across subjects and could allow for collaboration between researchers with experience in the field of education and subject specialists.


References

Armstrong, P. K., Croft, A. C. (1999) Identifying the learning needs in mathematics of entrants to undergraduate engineering programmes in an English university. European Journal of Engineering Education. 24(1), 59–71.
Ashwin, P. (2003) Variation in students’ experience of small group tutorials. In: Rust, C. (ed.) Improving student learning. Theory and practice 10 years on. Oxford: Oxford Centre for Staff and Learning Development, 251–56.
Ashwin, P. (2005) Variation in students’ experiences of the ‘Oxford tutorial’. Higher Education. 50(4), 631–44.
Ashwin, P., McLean, M. (2005) Towards a reconciliation of phenomenographic and critical pedagogy perspectives in higher education through a focus on academic engagement. In: Rust, C. (ed.) Improving student learning: Diversity and inclusivity. Oxford: Oxford Centre for Staff and Learning Development.
Astin, A. W. (1991a) Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York: American Council on Education.
Astin, A. W. (1991b) The changing American college student: implications for educational policy and practice. Higher Education. 22(2), 129–43.
Astin, A. W. (1992) The unrealized potential of American higher education. Innovative Higher Education. 17(2), 95–114.
Baderin, M. (2005) Towards improving students’ attendance and quality of undergraduate tutorials: a case study on law. Teaching in Higher Education. 10(1), 435–48.
Ballantyne, R., Borthwick, J., Packer, J. (2000) Beyond student evaluation of teaching: Identifying and addressing academic staff development needs. Assessment and Evaluation in Higher Education. 25(3), 221–36.
Bamber, J., Tett, L. (2000) Transforming the learning experiences of non-traditional students: a perspective from higher education. Studies in Continuing Education. 22(1), 57–75.
Baxter, A., Hatt, S. (2000) ‘Everything must go!’ Clearing and first-year performance. Journal of Further and Higher Education. 21(1), 5–14.


Beard, C. (2005) Student achievement: The role of emotions in motivation to learn – emotional maps. Sheffield: Higher Education Academy/Sheffield Hallam University.
Becher, T. (1989) Academic tribes and territories. Buckingham: SRHE/Open University Press.
Becker, H., Geer, B., Hughes, E. (1968) Making the grade: the academic side of college life. New York: Wiley.
Bernstein, B. (1996) Pedagogy, symbolic control and identity. London: Taylor & Francis.
Blicharski, J. R. D. (1999) New undergraduates: access and helping them prosper. Widening Participation and Lifelong Learning. 1(1), 34–40.
Bligh, D. (1971) What’s the use of lectures? London: University Teaching Methods Unit.
Bligh, D. (2000) What’s the point in discussion? Exeter: Intellect.
Booth, A. (1997) Listening to students: experiences and expectations in the transition to a history degree. Studies in Higher Education. 22(2), 205–20.
Booth, A. (2001) Developing history students’ skills in the transition to university. Teaching in Higher Education. 6(4), 487–503.
Borland, J., James, S. (1999) The learning experience of students with disabilities in higher education. A case study of a UK university. Disability and Society. 14(1), 85–101.
Bourdieu, P. (1988) Homo academicus. Cambridge: Polity Press.
Bourner, J., Hughes, M., Bourner, T. (2001) First-year undergraduate experiences of group project work. Assessment and Evaluation in Higher Education. 26(1), 19–39.
Boyle, T., Bradley, C., Chalk, P., Fisher, K., Pickard, P. (2005) Introducing a virtual learning environment and learning objects into higher education courses. International Journal of Learning Technology. 1(4), 383–98.
Bradbeer, J., Healey, M., Kneale, P. (2004) Undergraduate geographers’ understandings of geography, learning and teaching: a phenomenographic study. Journal of Geography in Higher Education. 28(1), 17–34.


Brooksbank, D. J., Clark, A., Hamilton, R., Pickernell, D. G. (1998) Views from the trenches: lessons from the introduction of WinEcon into a first year undergraduate programme. Computers in Higher Education Economics Review. 12(1), 13–18.
Bulpitt, H., Martin, P. J. (2005) Learning about reflection from the student. Active Learning in Higher Education. 6(3), 207–17.
Burke, V., Jones, I., Doherty, M. (2005) Analysing student perceptions of transferable skills via undergraduate degree programmes. Active Learning in Higher Education. 6(2), 132–44.
Byrne, M., Flood, B. (2003) Assessing the teaching quality of accounting programmes: An evaluation of the Course Experience Questionnaire. Assessment and Evaluation in Higher Education. 28(2), 135–45.
Byrne, M., Flood, B. (2005) A study of accounting students’ motives, expectations and preparedness for higher education. Journal of Further and Higher Education. 29(2), 111–24.
Carr, W., Kemmis, S. (1986) Becoming critical: Education, knowledge and action research. London: Falmer Press.
Cartney, P., Rouse, A. (2006) The emotional impact of learning in small groups: highlighting the impact on student progression and retention. Teaching in Higher Education. 11(1), 79–91.
Cashin, W. E. (1988) Student ratings of teaching: A summary of research. IDEA Paper No. 20. Manhattan: Kansas State University, Center for Faculty Evaluation and Development.
Castles, J. (2004) Persistence and the adult learner: factors affecting persistence in Open University students. Active Learning in Higher Education. 5(2), 166–79.
Clarke, K., Lane, A. (2005) Seminar and tutorial sessions: a case study evaluating relationships with academic performance and student satisfaction. Journal of Further and Higher Education. 29(1), 15–23.
Clegg, S. (2004) Critical readings: progress files and the production of the autonomous learner. Teaching in Higher Education. 9(3), 287–98.
Clouder, L. (2005) Caring as a ‘threshold concept’: transforming students in higher education into health(care) professionals. Teaching in Higher Education. 10(4), 505–17.


Coates, H. (2005) The value of student engagement for higher education quality assurance. Quality in Higher Education. 11(1), 25–36.
Cobban, A. B. (1988) The medieval English universities: Oxford and Cambridge to c. 1500. Berkeley: University of California Press.
Coffield, F., Moseley, D., Hall, E., Ecclestone, K. (2004) Learning styles and pedagogy in post-16 learning. A systematic and critical review. London: LSDA.
Committee on Higher Education (1963) Higher education: Report. London: HMSO.
Cook, A., Leckey, J. (1999) Do expectations meet reality? A survey of changes in first-year student opinion. Journal of Further and Higher Education. 23(2), 157–71.
Cooke, R. (2002) Information on quality and standards in higher education: final report of the Task Group. HEFCE Report 02/15. [online]. Available from: http://www.hefce.ac.uk/Pubs/hefce/2002/02_15.htm [accessed 20 February 2007].
Cooke, R., Barkham, M., Audin, K., Bradley, M., Davy, J. (2004) How social class differences affect students’ experiences of university. Journal of Further and Higher Education. 28(4), 407–21.
Cowan, J. (1998) On becoming an innovative university teacher: reflection in action. Buckingham: SRHE/Open University Press.
Cowan, J., Creme, P. (2005) Peer assessment or peer engagement? Students as readers of their own work. LATISS – Learning and Teaching in the Social Sciences. 2(2), 99–119.
Cox, W. (2000) Predicting the mathematical preparedness of first-year undergraduates for teaching and learning purposes. International Journal of Mathematical Education in Science and Technology. 31(2), 227–48.
Crook, C., Barrowcliff, D. (2001) Ubiquitous computing on campus: Patterns of engagement by university students. International Journal of Human-Computer Interaction. 13(2), 245–56.
Crook, C., Gross, H., Dymott, R. (2006) Assessment relationships in higher education: the tension of process and practice. British Educational Research Journal. 32(1), 95–114.
Crozier, W. R., Garbert-Jones, A. (1996) Finding a voice: shyness in mature students’ experience of university. Adults Learning. 7(8), 195–98.


Cullen, J., Hadjivassiliou, K., Hamilton, E., Kelleher, J., Sommerlad, E., Stern, E. (2002) Review of current pedagogic research and practice in the fields of post-compulsory education and lifelong learning. London: ESRC.
Cullingford, C., Crowther, K. (2005) Student learning and mathematics in higher education. Higher Education Review. 37(3), 33–43.
Curtis, S., Shani, N. (2002) The effect of taking paid employment during term-time on students’ academic studies. Journal of Further and Higher Education. 26(2), 129–38.
Dall’Alba, G. (1991) Foreshadowing conceptions of teaching. Research and Development in Higher Education. 13, 293–97.
Dalton, R. T. (2001) What do they bring with them? The fieldwork experiences of undergraduates on entry into higher education. Journal of Geography in Higher Education. 25(3), 379–93.
Davies, I., Hogarth, S. (2002) Evaluating Educational Studies. Evaluation and Research in Education. 16(2), 82–94.
Davies, I., Hogarth, S. (2004) Perceptions of Educational Studies. Educational Studies. 30(4), 425–39.
Davis, M. H., Harden, R. M. (1999) AMEE Medical Education Guide No. 15: Problem-based learning: a practical guide. Medical Teacher. 21, 130–40.
Day, C., Elliott, J., Somekh, B., Winter, R. (eds.) (2002) Theory and practice in action research: Some international perspectives. Oxford: Symposium Books.
De Vita, G. (2004) Integration and independent learning in a business synoptic module for international credit entry students. Teaching in Higher Education. 9(1), 69–81.
DGIV EDU HE (2006) The legitimacy of quality assurance in higher education: The role of public authorities and institutions. Strasbourg: Council of Europe Higher Education Forum.
Dochy, F., Segers, M., Sluijsmans, D. (1999) The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education. 24(3), 331–50.
Dochy, F., Segers, M., Van den Bossche, P., Gijbels, D. (2003) Effects of problem-based learning. A meta-analysis. Learning and Instruction. 13(5), 533–68.


Drake, P. (2005) Undergraduate mathematics diversified for non-standard entrants: whatever next! A case of teaching assistants and the curriculum. SCUTREA Annual Conference. University of Sussex.
Drew, L., Bailey, S., Shreeve, A. (2000) Phenomenographic research: methodological issues arising from a study investigating student approaches to learning in fashion design. ECER. Edinburgh.
Drew, S. (1998) Students’ perceptions of their learning outcomes. Teaching in Higher Education. 3(2), 197–217.
Duff, A. (2004) Understanding academic performance and progression of first-year accounting and business economics undergraduates: the role of approaches to learning and prior academic achievement. Accounting Education. 13(4), 409–30.
Edward, N., Middleton, J. (1997) Induction – a contextual approach to the start of engineers’ formation. Scottish Educational Research Association Annual Conference. Dundee.
Edward, N. S. (2001) Evaluation of a constructivist approach to student induction in relation to students’ learning styles. European Journal of Engineering Education. 26(4), 429–40.
Edward, N. S., Middleton, J. (2002) The challenge of induction! Introducing engineering students to higher education: a task-oriented approach. Innovations in Education and Teaching International. 39(1), 46–53.
Edward, N. S. (2003) First impressions last: an innovative approach to induction. Active Learning in Higher Education. 4(3), 226–42.
Edwards, A. (2005) Let’s get beyond community and practice: the many meanings of learning by participating. Curriculum Journal. 16(1), 49–65.
Eley, M., Meyer, J. (2004) Modelling the influences on learning outcomes of study processes in university mathematics. Higher Education. 47(4), 437–54.
Elliott, J. (1991) Action research for educational change. Milton Keynes: Open University Press.
Ellis, R., Calvo, R. (2006) Discontinuities in university student experiences of learning through discussions. British Journal of Educational Technology. 37(1), 55–68.
Entwistle, N. (2005) Ways of thinking and ways of teaching across contrasting subject areas. ISL Conference. London.


Entwistle, N. J., Meyer, J. H. F., Tait, H. (1991) Students’ failure: Disintegrated perceptions of studying and the learning environment. Higher Education. 21, 249–61.
EPPI-Centre (2003) EPPI-Centre keywording strategy for classifying education research version 0.9.7. [online]. Available from: http://eppi.ioe.ac.uk/EPPIWebContent/downloads/EPPI_Keyword_strategy_0.9.7.pdf [accessed January 2007].
Ewell, P. T., Jones, D. P. (1994) Data, indicators, and the National Center for Higher Education Management Systems. In: Borden, V. M. H., Banta, T. W. (eds.) Using performance indicators to guide strategic decision-making. San Francisco: Jossey-Bass.
Fellenz, M. (2004) Using assessment to support higher level learning: the multiple choice item development assignment. Assessment and Evaluation in Higher Education. 29(6), 703–20.
Fitzpatrick, J. (2006) An evaluative case study of the dilemmas experienced in designing a self-assessment strategy for Community Nursing students. Assessment and Evaluation in Higher Education. 31(1), 37–53.
Forrester, G., Parkinson, G. (2004) Mind the gap: students’ expectations and perceptions of induction to distance learning in higher education. BERA. Manchester.
Forrester, G., Motteram, G., Parkinson, G., Slaouti, D. (2005) Going the distance: students’ experiences of induction to distance learning in higher education. Journal of Further and Higher Education. 29(4), 293–306.
Fransson, A. (1977) On qualitative differences in learning. IV – Effects of motivation and test anxiety on process and outcome. British Journal of Educational Psychology. 47, 244–57.
Fuller, M., Healey, M., Bradley, A., Hall, T. (2004) Barriers to learning: a systematic study of the experience of disabled students in one university. Studies in Higher Education. 29(3), 303–18.
Garvin, J., Butcher, A., Stefani, L., Tariq, V., Lewis, M., Blumson, N., Govier, R., Hill, J. (1995) Group projects for first-year university students: an evaluation. Assessment and Evaluation in Higher Education. 20(3), 273–88.
Gaskin, S., Hall, R. (2002) Exploring London: a novel induction exercise for the new undergraduate. Journal of Geography in Higher Education. 26(2), 197–208.
Gibbs, G. (1992) Improving the quality of student learning. Bristol: Technical and Educational Services.


Gibbs, G. (1997) A teaching and learning strategy for higher education. Centre for Higher Education Practice, HEFCE.
Gibbs, G. (2003) Ten years of improving student learning. In: Rust, C. (ed.) Improving student learning. Theory and practice 10 years on. Oxford: Oxford Centre for Staff and Learning Development, 11–22.
Gibbs, G., Lucas, L., Spouse, J. (1997) The effects of class size and form of assessment on nursing students’ performance, approaches to study and course perceptions. Nurse Education Today. 17(4), 311–18.
Gibbs, G., Simpson, C. (2003) Measuring the response of students to assessment: the Assessment Experience Questionnaire. In: Rust, C. (ed.) Improving student learning. Theory and practice 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
Gibbs, G., Simpson, C. (2004) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education. 1, 3–31.
Gifford, C., Watt, P., Clark, W., Koster, S. (2005) Negotiating participation and power in a school setting: the implementation of active citizenship within the undergraduate sociology curriculum. LATISS – Learning and Teaching in the Social Sciences. 2(3), 175–90.
Gosling, D., D’Andrea, V. (2001) Quality development: a new concept for higher education. Quality in Higher Education. 7(1), 7–17.
Green, D., Brannigan, C., Mazelan, P., Giles, L. (1994) Measuring student satisfaction: A method of improving the quality of the student’s experience. In: Haselgrove, S. (ed.) The student experience. Buckingham: SRHE/Open University Press.
Gummesson, E. (2000) Qualitative methods in management research. London: Sage.
Haggis, T. (2003) Building, tunnelling and waiting: mature students ‘speaking differently’ about learning. SCUTREA Annual Conference. Bangor.
Haggis, T. (2003) Constructing images of ourselves? A critical investigation into ‘approaches to learning’ research in higher education. British Educational Research Journal. 29, 89–104.
Hall, J., Tinklin, T. (1998) Students first: the experiences of disabled students in higher education. SCRE Research Report 85. Edinburgh: Scottish Council for Research in Education.


Hall, R. (2003) Forging a learning community? A pragmatic approach to cooperative learning. Arts and Humanities in Higher Education. 2(2), 155–72.
Hampel, R. (2006) Rethinking task design for the digital age: a framework for language teaching and learning in a synchronous on-line environment. ReCALL. 18(1), 105–21.
Harker, M., Koutsantoni, D. (2005) Can it be as effective? Distance versus blended learning in a web-based EAP programme. ReCALL. 17(2), 197–216.
Harrison, N. (2006) The impact of negative experiences, dissatisfaction and attachment on first year undergraduate withdrawal. Journal of Further and Higher Education. 30(4), 377–91.
Harvey, L. (2005) A history and critique of quality evaluation in the UK. Quality Assurance in Education. 13(4), 263–76.
Harvey, L., Drew, S., Smith, M. (2006) The first-year experience: a review of literature for the Higher Education Academy. York: Higher Education Academy.
Harvey, L., Green, D. (1993) Defining quality. Assessment and Evaluation in Higher Education. 18(1), 9–34.
Harvey, L., Knight, P. (1996) Transforming higher education. Buckingham: SRHE/Open University Press.
Harvey, L., Burrows, A., Green, D. (1992) Criteria of Quality. Birmingham: UCE, QHE.
Hatt, S., Hannan, A., Baxter, A. (2005) Bursaries and student success: a study of students from low-income groups at two institutions in the south west. Higher Education Quarterly. 59(2), 111–26.
Healey, M. (1992) Curriculum development and ‘enterprise’: group work, resource-based learning and the incorporation of transferable skills into a first year practical course. Journal of Geography in Higher Education. 16(1), 7–19.
Healey, M. (2005) Linking research and teaching to benefit student learning. Journal of Geography in Higher Education. 29(2), 183–201.
Healey, M., Matthews, H., Livingstone, I., Foster, I. (1996) Learning in small groups in university geography courses: Designing a core module around group projects. Journal of Geography in Higher Education. 20(2), 167–80.
Higher Education Academy (HEA) (2005) Strategic plan 2005–2010. York: Higher Education Academy.


Higgins, R. (2000) ‘Be more critical!’: Rethinking assessment feedback. BERA. Cardiff.
Higher Education Academy (HEA) (2006a) Invitation to tender. Academy reviews of research literature. [online]. Available from: https://www.heacademy.ac.uk/research/InvitationToTender2006.doc [accessed 12 May 2006].
Higher Education Academy (HEA) (2006b) The UK professional standards framework for teaching and supporting learning in higher education. York: Higher Education Academy.
Higher Education Funding Council for England (HEFCE) (2003) Collecting and using student feedback on quality and standards of learning and teaching in higher education. A report to the Higher Education Funding Council for England. [online]. Available from: http://www.hefce.ac.uk/Pubs/rdreports/2003/rd08_03 [accessed 19 February 2007].
Higher Education Funding Council for England (HEFCE) (2005) Changing the landscape of higher education. [online]. Available from: http://www.hefce.ac.uk/pubs/hefce/2005/annrev/ [accessed 20 February 2007].
Higher Education Funding Council for England (HEFCE) (2007) National Student Survey. [online]. Available from: http://www.hefce.ac.uk/learning/nss/ [accessed 20 February 2007].
Higher Education Quality Council (1996) Guidelines on quality assurance. London: Higher Education Quality Council.
Hills, L. (2005) Evaluating the evaluation tools: methodological issues in the FAST project. BERA. Glamorgan.
Hinett, K. (1998) The role of dialogue and self assessment in improving student learning. BERA. Belfast.
Houston, M., Lebeau, Y. (2006) The social mediation of university learning. Working Paper 3. ESRC/Higher Education Academy.
Hoyles, C., Noss, R., Newman, K. (2001) Changing patterns of transition from school to university. International Journal of Mathematical Education in Science and Technology. 32(6), 829–45.
Huber, M. T., Morreale, S. P. (eds.) (2001) Disciplinary styles in the scholarship of teaching and learning: Exploring common ground. Menlo Park, CA: Carnegie Foundation/American Association for Higher Education.


Hughes, J., Slack, K., Baker, C. (2006) Learning journeys research project. BERA. Warwick.
Hult, M., Lennung, S. (1980) Towards a definition of action research: a note and bibliography. Journal of Management Studies. 17, 241–50.
Ibrahim, M., Ogston, S., Crombie, I., Alhasso, D., Mukhopadhyay, S. (2006) Greater knowledge gain with structured than student-directed learning in child health: cluster randomized trial. Medical Teacher. 28(3), 239–43.
Expert Information (2007) Index to Theses. [online]. Available from: www.theses.com [accessed 20 February 2007].
Ivanic, R. (1998) Writing and identity: the discoursal construction of identity in academic writing. Amsterdam: John Benjamins.
James, D. (1997) Making the graduate: Assessment events as social practices. BERA. York.
Jarvis, J., Woodrow, D. (2001) Learning preferences in relation to subjects of study of students in higher education. BERA. Leeds.
Johnson, R. (1998) Higher Education in the Mirror: Reflections of context, relations, and values in the act of student feedback. Higher Education Close Up. Preston, University of Central Lancashire.
Karagiannopoulou, E., Christodoulides, P. (2005) The impact of Greek university students’ perceptions of their learning environment on approaches to studying and academic outcomes. International Journal of Educational Research. 43, 329–50.
Kell, C., Deursen, R. (2002) Undergraduate curricula facilitate student learning profile development. BERA. Exeter.
Kinchin, I., De-Leij, F., Hay, D. (2005) The evolution of a collaborative concept mapping activity for undergraduate microbiology students. Journal of Further and Higher Education. 29(1), 1–14.
Knight, J. (2004) Comparison of student perception and performance in individual and group assessments in practical classes. Journal of Geography in Higher Education. 28(1), 63–81.
Knox, H. (2005) Making the transition from further to higher education: the impact of a preparatory module on retention, progression and performance. Journal of Further and Higher Education. 29(2), 103–10.


Kolb, D. A. (1984) Experiential learning. Englewood Cliffs, NJ: Prentice Hall.
Kuh, G. (2002) The college student report. Indiana University: Center for Postsecondary Research and Planning.
Kwan, K.-P. (1999) How fair are student ratings in assessing the teaching performance of university teachers? Assessment and Evaluation in Higher Education. 24(2), 181–95.
Laing, C., Robinson, A., Johnston, V. (2005) Managing the transition into higher education: An on-line spiral induction programme. Active Learning in Higher Education. 6(3), 243–55.
Lane, A., Hall, R., Lane, J. (2004) Self-efficacy and statistics performance among sports studies students. Teaching in Higher Education. 9(4), 435–48.
Langan, M., Wheater, P., Shaw, E., Haines, B., Cullen, R., Boyle, J., Penney, D., Oldekop, J., Ashcroft, C., Lockey, L., Preziosi, R. (2005) Peer assessment of oral presentations: effects of student gender, university affiliation and participation in the development of assessment criteria. Assessment and Evaluation in Higher Education. 30(1), 21–34.
Laurillard, D. (1997) Styles and approaches in problem-solving. In: Marton, F., Hounsell, D., Entwistle, N. (eds.) The experience of learning. Edinburgh: Scottish Academic Press.
Laurillard, D. (1998) Multimedia and the learner’s experience of narratives. Computers and Education. 31(2), 229–42.
Laurillard, D. (2005) E-learning in higher education. In: Ashwin, P. (ed.) Changing higher education. London: Routledge.
Lave, J., Wenger, E. (1991) Situated learning: legitimate peripheral participation. Cambridge: Cambridge University Press.
Lawless, C., Richardson, J. T. E. (2004) Monitoring the experiences of graduates in distance education. Studies in Higher Education. 29(3), 353–74.
Lawrence, J., Ashford, K., Dent, P. (2006) Gender differences in coping strategies of undergraduate students and their impact on self-esteem and attainment. Active Learning in Higher Education. 7(3), 273–82.
Lea, M., Street, B. (1998) Student writing and staff feedback in higher education: an academic literacies approach. Studies in Higher Education. 23(2), 157–72.


Leathwood, C. (2006) Gender, equity and the discourse of the independent learner in higher education. Higher Education. 52(4), 611–34.
Lizzio, A., Wilson, K., Simons, R. (2002) University students’ perceptions of the learning environment and the academic outcomes: implications for theory and practice. Studies in Higher Education. 27, 27–52.
Lomas, L. (2002) Does the development of mass education necessarily mean the end of quality? Quality in Higher Education. 8(1), 71–79.
Long, M., Hillman, K. (2000) Course Experience Questionnaire 1999. Report prepared for the Graduate Careers Council of Australia. Canberra: Australian Government Department of Education, Science and Training.
Lowe, H., Cook, A. (2003) Mind the gap: are students prepared for higher education? Journal of Further and Higher Education. 27(1), 53–76.
Lucas, U., Cox, P., Croudace, C., Milford, P. (2004) ‘Who writes this stuff?’: students’ perceptions of their skills development. Teaching in Higher Education. 9(1), 55–68.
Lyon, P., Hendry, G. (2002) The use of course experience questionnaire as a monitoring evaluation tool in a problem-based medical programme. Assessment and Evaluation in Higher Education. 27(4), 339–52.
McCune, V. (2004) Development of first-year students’ conceptions of essay writing. Higher Education. 47(3), 257–82.
McCune, V., Hounsell, D. (2005) The development of students’ ways of thinking and practising in three final-year biology courses. Higher Education. 49(3), 255–89.
MacDonald, J. (2002) Developing competent e-learners: the role of assessment. Learning Communities and Assessment Cultures Conference. University of Northumbria.
MacDonald, J. (2004) Developing competent e-learners: the role of assessment. Assessment and Evaluation in Higher Education. 29(2), 215–26.
Malcolm, J., Zukas, M. (2001) Bridging pedagogic gaps: conceptual discontinuities in higher education. Teaching in Higher Education. 6, 33–42.
McDowell, L., Mowl, G. (1995) Innovative assessment: its impact on students. In: G. Gibbs (ed.) Improving student learning through assessment and evaluation. Oxford: Oxford Centre for Staff Development.


McLean, M., Henson, Q., Hiles, L. (2003) The possible contribution of student drawings to evaluation in a new problem-based learning medical programme: a pilot study. Medical Education. 37(10), 895–906.
MacMillan, J., McLean, M. (2005) Making first-year tutorials count: operationalising the assessment-learning connection. Active Learning in Higher Education. 6(2), 94–105.
Maguire, S., Evans, S., Dyas, L. (2001) Approaches to learning: a study of first-year geography undergraduates. Journal of Geography in Higher Education. 25(1), 95–107.
Mann, S. (2001) Alternative perspectives on the student experience: alienation and engagement. Studies in Higher Education. 26(1), 7–19.
Marsh, H. W. (1984) Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology. 76(5), 707–54.
Marsh, H. W. (1987) Students’ evaluations of university teaching: research findings, methodological issues, and directions for future research. International Journal of Educational Research. 11(3), 253–388.
Marsh, H. W. (in press) Students’ evaluations of university teaching: A multidimensional perspective. In: R. P. Perry and J. C. Smart (eds.) The scholarship of teaching and learning in higher education: An evidence based perspective. New York: Springer.
Marsh, H. W., Overall, J. U. (1980) Validity of students’ evaluation of teaching effectiveness: Cognitive and affective criteria. Journal of Educational Psychology. 72(4), 468–75.
Marsh, H. W., Rowe, K. J., Martin, A. (2002) PhD students’ evaluation of research supervision: issues, complexities, and challenges in a nationwide Australian experiment in benchmarking universities. Journal of Higher Education. 73(3), 313–48.
Marton, F. (1981) Phenomenography: Describing conceptions of the world around us. Instructional Science. 10(2), 177–200.
Marton, F. (1986) Phenomenography: A research approach to investigating different understandings of reality. Journal of Thought. 21(3), 28–49.
Marton, F., Hounsell, D. J., Entwistle, N. J. (eds.) (1984) The experience of learning. Edinburgh: Scottish Academic Press.
Marton, F., Säljö, R. (1976) On qualitative differences in learning: I. Outcome and process. British Journal of Educational Psychology. 46, 4–11.


May, S., et al. (2005) Feet under the table: students’ perceptions of the perceptions of learning support provided during their first year of study on health and social care programmes. SRHE. Edinburgh.
Meyer, J. (1995) Gender group differences in the learning behaviour of entering first year university students. Higher Education. 29(1), 201–15.
Meyer, J. H. F., Land, R. (2003) Threshold concepts and troublesome knowledge – linkages to ways of thinking and practising. In: C. Rust (ed.) Improving student learning: Theory and practice 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
Miller, P. M. (1994) The first year at medical school: some findings and student perceptions. Medical Education. 28(1), 5–7.
Monks, K., Conway, E., Dhuigneain, M. (2006) Integrating personal development and career planning: The outcomes for first year undergraduate learning. Active Learning in Higher Education. 7(1), 73–86.
Morris, J. (2001) The conceptions of the nature of learning of first-year physiotherapy students and their relationship to students’ learning outcomes. Medical Teacher. 23(5), 503–07.
Murphy, M. (2000) How the other half lives: a case study of successful and unsuccessful mature applicants in Irish higher education. SCUTREA Annual Conference. Nottingham.
Murphy, M., Fleming, T. (1998) College knowledge: power, policy and the mature student experience at university. SCUTREA Annual Conference. Exeter.
Murray, H. G., Rushton, J. P., Paunonen, S. V. (1990) Teacher personality traits and student instructional ratings in six types of university courses. Journal of Educational Psychology. 82(2), 250–61.
Nardi, E. (1997) Didactical observations related to the teaching of advanced mathematics based on a study of first-year mathematics undergraduates’ difficulties in the encounter with mathematical abstraction. BERA. York.
National Committee of Inquiry into Higher Education (1997) Higher education in the learning society: Report of the national committee. London: NCIHE.
Nestel, D., Kidd, J. (2005) Peer assisted learning in patient-centred interviewing: the impact on student tutors. Medical Teacher. 27(5), 439–44.


Newlands, D., Ward, M. (1998) Using the web and e-mail as substitutes for traditional university teaching methods: student and staff experiences. University of Aberdeen.
Newman, M. (2003) A pilot systematic review and meta-analysis on the effectiveness of Problem Based Learning. LTSN 01 Special Report 2. Newcastle-upon-Tyne: LTSN/ESRC. [online]. Available from: http://www.ltsn-01.ac.uk/resources/features/pbl [accessed 12 February 2007].
Nordvall, R. C., Braxton, J. M. (1996) An alternative definition of quality of undergraduate college education. Journal of Higher Education. 67(5), 483–97.
Norris, S., Ennis, R. (1989) Evaluating critical thinking. Pacific Grove: Critical Thinking Press.
Northedge, A. (2003) Enabling participation in academic discourse. Teaching in Higher Education. 8(2), 169–80.
Norton, L. S., Owens, T., Clark, L. (2004) Analysing metalearning in first-year undergraduates through their reflective discussions and writing. Innovations in Education and Teaching International. 41(4), 423–41.
O’Connor, M. (2003) Perceptions and experience of learning at university: What is it like for undergraduates? Research in Post-Compulsory Education. 8(1), 53–72.
Orsini-Jones, M., Courney, K., Dickinson, A. (2005) Supporting foreign language learning for a blind student: a case study from Coventry University. Support for Learning. 20(3), 146–52.
Orsmond, P., Merry, S., Reiling, K. (2004) Undergraduate project work: can directed tutor support enhance skills development? Assessment and Evaluation in Higher Education. 29(5), 625–42.
Orsmond, P., Merry, S., Reiling, K. (2005) Biology students’ utilisation of tutors’ formative feedback: a qualitative interview study. Assessment and Evaluation in Higher Education. 30(4), 369–86.
Pace, C. R. (1980) Measuring the quality of student effort. Current Issues in Higher Education. 2, 10–16.
Pace, C. R. (1984) Measuring the quality of college student experiences. Los Angeles: Center for the Study of Evaluation, Graduate School of Education, University of California.
Pace, C. R. (1985) The credibility of student self-reports. Los Angeles: Center for the Study of Evaluation, Graduate School of Education, University of California.


Pace, C. R. (1987) CSEQ test manual and norms: College student experience questionnaire. Los Angeles: Center for the Study of Evaluation, Graduate School of Education, University of California.
Papinczak, T., Young, L., Groves, M., Haynes, M. (2006) Effects of a metacognitive intervention on students’ approaches to learning and self-efficacy in a first year medical course. Advances in Health Science Education. [online]. Available from: http://www.springerlink.com/content/l72270974v544145/?p=e41394e8d1d94252a700698224f2ac1e&pi=0.
Pascarella, E. (2001) Identifying excellence in undergraduate education. Change. 33(3). [online]. Available from: http://www.findarticles.com/p/articles/mi_m1254/is_3_33/ai_75506799.
Perry, W. G. (1968) Forms of intellectual and ethical development in the college years. New York: Holt, Rinehart and Winston.
Pitts, S. (2005) ‘Testing, testing...’: How do students use written feedback? Active Learning in Higher Education. 6(3), 218–29.
Price, L., Richardson, J. (2003) Meeting the challenge of diversity: a cautionary tale about learning styles. In: C. Rust (ed.) Improving student learning theory and practice, 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
Pring, R. (1992) Academic respectability and professional relevance: An inaugural lecture delivered before the University of Oxford 8th May. Oxford: Clarendon Press.
Pring, R. (2000) Philosophy of educational research. London, New York: Continuum.
Prosser, M. (2005) Why we shouldn’t use student surveys of teaching as satisfaction ratings. Higher Education Academy. [online]. Available from: http://www.heacademy.ac.uk/research/Interpretingstudentsurveys.doc.
Prosser, M., Hazel, E., Trigwell, K., Lyons, F. (1996) Qualitative and quantitative indicators of students’ understanding of physics concepts. Research and Development in Higher Education. 19, 670–75.
Prosser, M., Ramsden, P., Trigwell, K., Martin, E. (2003) Dissonance in experience of teaching and its relation to the quality of student learning. Studies in Higher Education. 28(1), 37–48.
Prosser, M., Trigwell, K. (1990) Student evaluations of teaching and courses: Student study strategies as a criterion of validity. Higher Education. 20, 135–42.


Prosser, M., Trigwell, K. (1999) Understanding learning and teaching: The experience of higher education. Buckingham: SRHE/Open University Press.
Ramsden, P. (1991) A performance indicator of teaching quality in higher education: the Course Experience Questionnaire. Studies in Higher Education. 16, 129–50.
Ramsden, P. (2003) Learning to teach in higher education. 2nd edition. London: RoutledgeFalmer.
Ramsden, P., Entwistle, N. (1981) Effects of academic departments on students’ approaches to studying. British Journal of Educational Psychology. 51, 368–83.
Rich, M., Brown, A. (2006) Underpinning students’ information literacy through the scholarship of teaching and learning. Higher Education Review. 38(2), 60–76.
Richardson, J. T. E. (1991) A British evaluation of the course experience questionnaire. Studies in Higher Education. 19(1), 59–68.
Richardson, J. T. E. (2000) Researching student learning: Approaches to studying in campus-based and distance education. Buckingham: SRHE/Open University Press.
Richardson, J. T. E. (2005a) Instruments for obtaining student feedback: a review of the literature. Assessment and Evaluation in Higher Education. 30(4), 387–415.
Richardson, J. T. E. (2005b) Students’ perceptions of academic quality and approaches to studying in distance education. British Educational Research Journal. 31(1), 7–27.
Richardson, J. T. E., Long, G. L., Woodley, A. (2003) Academic engagement and perceptions of quality on distance education. Open Learning. 18(3), 223–44.
Robbins, T. L., DeNisi, A. S. (1994) Interpersonal affect and cognitive processing in performance appraisal: Towards closing the gap. Journal of Applied Psychology. 79, 341–50.
Roberts, C., Lawson, M., Newble, D., Self, A., Chan, P. (2005) The introduction of large class problem-based learning into an undergraduate medical curriculum: an evaluation. Medical Teacher. 27(6), 527–33.
Rowland, S. (2000) The enquiring university teacher. Buckingham: SRHE/Open University Press.


Rust, C., O’Donovan, B., Price, M. (2005) A social constructivist assessment process model: how the research literature shows us this could be best practice. Assessment and Evaluation in Higher Education. 30(3), 231–40.
Sadlo, G., Richardson, J. T. E. (2003) Approaches to studying and perceptions of the academic environment in students following problem-based and subject-based curricula. Higher Education Research and Development. 22(3), 253–74.
Sander, P., Stevenson, K., King, M., Coates, D. (2000) University students’ expectations of teaching. Studies in Higher Education. 25(3), 309–23.
Savin-Baden, M. (2000) Problem-based learning in higher education: Untold stories. Buckingham: SRHE/Open University Press.
Savin-Baden, M., Wilkie, K. (eds.) (2004) Challenging research in problem-based learning. Maidenhead: SRHE/Open University Press.
Scott, S., Issa, T. (2006) Lessons learned from using students’ feedback to inform academic teaching practice. 15th Annual Teaching Learning Forum: Experience of Learning. Perth.
Seldin, P. (1988) Evaluating and developing administrative performance. San Francisco: Jossey-Bass.
Selwyn, N., Marriott, N., Marriott, P. (2000) Net gains or net pains? Business students’ use of the internet. Higher Education Quarterly. 54(2), 166–86.
Shah, A., Treby, E. (2006) Using a community based project to link teaching and research: The Bourne Stream partnership. Journal of Geography in Higher Education. 30(1), 33–48.
Shanahan, M., Meyer, J. (2001) A student learning inventory for economics based on the students’ experience of learning: A preliminary study. Journal of Economics Education. 32, 259–67.
Sharpe, R., Benfield, G. (2005) The student experience of e-learning in higher education: a review of the literature. Brookes eJournal of Learning and Teaching. 1(3).
Shepherd, J. (2006) Absence of elite distorts picture. [online]. Available from: http://www.thes.co.uk/story.aspx?story_id=2031982 [accessed 20 February 2007].
Slack, K., Casey, L. (2002) The best years of your life? Contrasting the local and non-local student experience in higher education. BERA. Exeter.


Smith, B. M. (2006) Quest for quality: The UK experience. New Directions for Higher Education. 133, 43–52.
Smith, F. (2004a) ‘It’s not all about grades’: accounting for gendered degree results in geography at Brunel University. Journal of Geography in Higher Education. 28(2), 167–78.
Smith, K. (2004b) School to university: an investigation into the experience of first-year students of English at British universities. Arts and Humanities in Higher Education. 3(1), 81–93.
Smith, K., Clegg, S., Lawrence, R., Todd, M. J. (2004) Fostering autonomy through work-based experiences: challenges for university educators and students. LATISS – Learning and Teaching in the Social Sciences. 1(3), 189–204.
Smyth, K. (2004) The benefits of students learning about critical evaluation rather than being summatively judged. Assessment and Evaluation in Higher Education. 29(3), 369–78.
Somervell, H., Allan, J. (1995) Self-assessment in two education studies modules at level 2/3 at the University of Wolverhampton. In: J. Allan et al. (eds.) Approaches to learning in higher education. Wolverhampton: University of Wolverhampton, Educational Research Unit, 6–12.
Stenhouse, L. (1980) Curriculum research and development in action. London: Heinemann Educational Books.
Stokes, C. W., Cannavina, C., Cannavina, G. (2005) The state of readiness of student health professionals for web-based learning environments. Health Informatics Journal. 10(3), 195–204.
Student Experience Special Interest Group of EAIR (2005) Report meeting. Riga, Latvia: EAIR.
Surridge, P. (2006) National Student Survey 2005: Findings. [online]. Available from: http://www.hefce.ac.uk/pubs/rdreports/2006/rd22_06/ [accessed 20 February 2007].
Sutherland, P. (2003a) The adequacy of the study skills of a cohort of first-year nursing students: an investigation of attitudes. Journal of Adult and Continuing Education. 9(1), 22–31.
Sutherland, P. (2003b) Case studies of the learning and study skills of first year students. Research in Post-Compulsory Education. 8(3), 425–40.


Svensson, L. (1977) On qualitative differences in learning: III. Study skill and learning. British Journal of Educational Psychology. 47, 233–43.
Sweet, J. (2006) Beyond reflection dogma. Professional Lifelong Learning: beyond reflective practice. Trinity and All Saints College, Leeds.
Thomas, L. (2002a) Building social capital to improve student success. BERA. Exeter.
Thomas, L. (2002b) Student retention in higher education: the role of institutional habitus. Journal of Education Policy. 17(4), 423–42.
Thomson, K., Falchikov, N. (1998) ‘Full on until the sun comes out’: the effects of assessment on student approaches to studying. Assessment and Evaluation in Higher Education. 23(4), 379–90.
Times Higher Education Supplement (2006) Times Higher Student Satisfaction Rating 2006. [online]. Available from: http://www.thes.co.uk/statistics/student_life/satisfaction_2006.aspx [accessed 10 February 2007].
TLRP (Teaching and Learning Research Programme) (2007) Projects: Higher Education. [online]. Available from: http://www.tlrp.org/proj/Higher.html [accessed 19 February 2007].
Todd, M., Bannister, P., Clegg, S. (2004) Independent inquiry and the undergraduate dissertation: perceptions and experiences of final-year social science students. Assessment and Evaluation in Higher Education. 29(3), 335–56.
Topping, K. J., Watson, G. A., Jarvis, R. J., Hill, S. (1996) Same-year paired peer tutoring with first year undergraduates. Teaching in Higher Education. 1(3), 341–56.
Trigwell, K. (2005) Teaching-research relations, cross-disciplinary collegiality and student learning. Higher Education. 49(3), 235–54.
Trigwell, K., Ashwin, P. (2003) Undergraduate students’ experience of learning at the University of Oxford. Oxford: Institute for the Advancement of University Learning, University of Oxford.
Trigwell, K., Ashwin, P. (2006) An exploratory study of situated conceptions of learning and learning environments. Higher Education. 51(2), 243–58.
Trotter, E. (2006) Student perceptions of continuous summative assessment. Assessment and Evaluation in Higher Education. 31(5), 505–22.


Trow, M. (1974) Problems in the transition from elite to mass higher education. In: OECD (ed.) Policies for higher education: General report on the conference on the future structures of post-secondary education. Paris: OECD, 55–101.
Trow, M. (2006) Reflections on the transition from elite to mass to universal access: Forms and phases of higher education in modern societies since WWII. In: Forest, J., Altbach, P. (eds.) International handbook of higher education. Springer, 243–80.
Turner, P. (2004) An analytical model for understanding undergraduate learning during placements and practical laboratory classes. BERA New Researchers/Student Conference. Manchester.
University of Buckingham (2006) News: Buckingham’s success in National Student Survey 2006. [online]. Available from: http://www.buckingham.ac.uk/news/newsarchive2006/nss-results.html [accessed 19 February 2007].
Van Vught, F. A., Westerheijden, D. F. (1994) Towards a general model of quality assessment in higher education. Higher Education. 28(3), 355–71.
Vermunt, J. (2005) Relations between student learning patterns and personal and contextual factors and academic performance. Higher Education. 49(3), 205–34.
Vygotsky, L. S. (1978) Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Waite, S., Davis, B. (2006) Collaboration as a catalyst for critical thinking in undergraduate research. Journal of Further and Higher Education. 30(4), 405–19.
Ward, P., Jayanetti, C. (2006) The measure of contentment. [online]. Available from: http://www.thes.co.uk/story.aspx?story_id=2031982 [accessed 20 February 2007].
Weaver, M. (2006) Do students value feedback? Students’ perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education. 31(3), 379–94.
Webb, G. (1997) Deconstructing deep and surface: Towards a critique of phenomenography. Higher Education. 33(2), 195–212.
Wiers-Jenssen, J., Stensaker, B., Grogaard, J. B. (2002) Student satisfaction: Towards an empirical deconstruction of the concept. Quality Assurance in Education. 10(2), 183–95.
Williams, E. (1992) Student attitudes towards approaches to learning and assessment. Assessment and Evaluation in Higher Education. 17(1), 45–58.


Wilson, K. L., Lizzio, A., Ramsden, P. (1997) The development, validation and application of the course experience questionnaire. Studies in Higher Education. 22(1), 33–53.
Yorke, M. (2000) The quality of the student experience: what can institutions learn from data relating to non-completion. Quality in Higher Education. 6(1), 61–75.


Appendix: Items selected for detailed review

Inventory-based studies

Barrie, S., Ginns, P., Prosser, M. (2005) Early impact and outcomes of an institutionally aligned, student focused learning perspective on teaching quality assurance. Assessment and Evaluation in Higher Education. 30(6), 641–56.
Byrne, M., Flood, B., Willis, P. (2004) Using the student learning framework to explore the variation in academic performance of European business students. Journal of Further and Higher Education. 28(1), 67–78.
Coffey, M., Gibbs, G. (2004) The impact of training of university teachers on their teaching skills, their approach to teaching and the approach to learning of their students. Active Learning in Higher Education. 5(1), 87–100.
Duff, A. (1996a) Approaches to learning in finance students: the effect of entry qualifications, gender and age. Paisley: Department of Economics and Management, University of Paisley.
Duff, A. (1996b) Learning styles of UK accounting students: four studies of the reliability and validity of the Learning Styles Questionnaire. Paisley: Department of Economics and Management, University of Paisley.
Duff, A. (2004) Understanding academic performance and progression of first-year accounting and business economics undergraduates: the role of approaches to learning and prior academic achievement. Accounting Education. 13(4), 409–30.
Edward, N. S. (2001) Evaluation of a constructivist approach to student induction in relation to students’ learning styles. European Journal of Engineering Education. 26(4), 429–40.
Eley, M., Meyer, J. (2004) Modelling the influences on learning outcomes of study processes in university mathematics. Higher Education. 47(4), 437–54.
Ellis, R., Calvo, R. (2006) Discontinuities in university student experiences of learning through discussions. British Journal of Educational Technology. 37(1), 55–68.
Entwistle, N. (2005) Ways of thinking and ways of teaching across contrasting subject areas. ISL Conference. London.
Entwistle, N., Tait, H. (1990) Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments. Higher Education. 19, 169–94.
Entwistle, N. J., Meyer, J. H. F., Tait, H. (1991) Student failure: Disintegrated perceptions of studying and the learning environment. Higher Education. 21, 249–61.
Gibbs, G., Lucas, L., Spouse, J. (1997) The effects of class size and form of assessment on nursing students’ performance, approaches to study and course perceptions. Nurse Education Today. 17(4), 311–18.
Gibbs, G., Coffey, M. (2004) The impact of training of university teachers on their teaching skills, their approach to teaching and the approach to learning of their students. Active Learning in Higher Education. 5(1), 87–100.
Jarvis, J., Woodrow, D. (2001) Learning preferences in relation to subjects of study of students in higher education. BERA. Leeds.
Jones, C. B. (2001) Networked legal learning: an evaluation of the student learning experience. International Review of Law, Computers and Technology. 15(3), 317–29.
Karagiannopoulou, E., Christodoulides, P. (2005) The impact of Greek university students’ perceptions of their learning environment on approaches to studying and academic outcomes. International Journal of Educational Research. 43, 329–50.
Kell, C., Deursen, R. (2002) Undergraduate curricula facilitate student learning profile development. BERA. Exeter.
Kell, C., Jones, L. (2005) Teaching approaches: academic v work-placement. Do they need to match? BERA. Glamorgan.
Lawless, C., Richardson, J. T. E. (2004) Monitoring the experiences of graduates in distance education. Studies in Higher Education. 29(3), 353–74.


Lizzio, A., Wilson, K., Simons, R. (2002) University students’ perceptions of the learning environment and the academic outcomes: implications for theory and practice. Studies in Higher Education. 27, 27–52.
Lyon, P., Hendry, G. (2002) The use of course experience questionnaire as a monitoring evaluation tool in a problem-based medical programme. Assessment and Evaluation in Higher Education. 27(4), 339–52.
Maguire, S., Evans, S., Dyas, L. (2001) Approaches to learning: a study of first-year geography undergraduates. Journal of Geography in Higher Education. 25(1), 95–107.
McCune, V., Hounsell, D. (2005) The development of students’ ways of thinking and practising in three final-year biology courses. Higher Education. 49(3), 255–89.
Meyer, J. (1995) Gender group differences in the learning behaviour of entering first year university students. Higher Education. 29(1), 201–15.
Meyer, J. H. F. (1991) Study orchestration: the manifestation, interpretation and consequences of contextualized approaches to studying. Higher Education. 22, 297–316.
Morris, J. (2001) The conceptions of the nature of learning of first-year physiotherapy students and their relationship to students’ learning outcomes. Medical Teacher. 23(5), 503–07.
Norton, L. S., Crowley, C. M. (1995) Can students be helped to learn how to learn? An evaluation of an ‘approaches to learning’ programme for first year degree students. Higher Education. 29(3), 307–28.
Norton, L. S., Owens, T., Clark, L. (2004) Analysing metalearning in first-year undergraduates through their reflective discussions and writing. Innovations in Education and Teaching International. 41(4), 423–41.
Pheiffer, G., Holley, D., Andrew, D. (2005) Developing thoughtful students: using learning styles in an HE context. Education and Training. 47(6), 422–31.
Price, L., Richardson, J. (2003) Meeting the challenge of diversity: a cautionary tale about learning styles. In: C. Rust (ed.) Improving student learning theory and practice, 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
Prosser, M., Ramsden, P., Trigwell, K., Martin, E. (2003) Dissonance in experience of teaching and its relation to the quality of student learning. Studies in Higher Education. 28(1), 37–48.
Prosser, M., Hazel, E., Trigwell, K., Lyons, F. (1996) Qualitative and quantitative indicators of students’ understanding of physics concepts. Research and Development in Higher Education. 19, 670–75.
Prosser, M., Trigwell, K. (1990) Student evaluations of teaching and courses: Student study strategies as a criterion of validity. Higher Education. 20, 135–42.
Richardson, J. T. E. (1991) A British evaluation of the course experience questionnaire. Studies in Higher Education. 19(1), 59–68.
Richardson, J. T. E. (2000) Researching student learning: Approaches to studying in campus-based and distance education. Buckingham: SRHE/Open University Press.
Richardson, J. T. E. (2005b) Students’ perceptions of academic quality and approaches to studying in distance education. British Educational Research Journal. 31(1), 7–27.
Richardson, J., Edmunds, R. (2007) A cognitive-developmental model of university learning. TLRP/HEA.
Richardson, J. T. E., Long, G. L., Woodley, A. (2003) Academic engagement and perceptions of quality on distance education. Open Learning. 18(3), 223–44.
Robbins, T. L., DeNisi, A. S. (1994) Interpersonal affect and cognitive processing in performance appraisal: Towards closing the gap. Journal of Applied Psychology. 79, 341–50.
Shanahan, M., Meyer, J. (2001) A student learning inventory for economics based on the students’ experience of learning: A preliminary study. Journal of Economics Education. 32, 259–67.
Sadlo, G., Richardson, J. T. E. (2003) Approaches to studying and perceptions of the academic environment in students following problem-based and subject-based curricula. Higher Education Research and Development. 22(3), 253–74.
Sutherland, P. (2003) The adequacy of the study skills of a cohort of first-year nursing students: an investigation of attitudes. Journal of Adult and Continuing Education. 9(1), 22–31.


Tooth, D., Tonge, K., McManus, I. C. (1989) Anxiety and study methods in preclinical students: causal relation to examination performance. Medical Education. 23(5), 416–21.
Trigwell, K. (2005) Teaching-research relations, cross-disciplinary collegiality and student learning. Higher Education. 49(3), 235–54.
Trigwell, K., Ashwin, P. (2003) Undergraduate students’ experience of learning at the University of Oxford. Oxford: Institute for the Advancement of University Learning, University of Oxford.
Trigwell, K., Ashwin, P. (2006) An exploratory study of situated conceptions of learning and learning environments. Higher Education. 51(2), 243–58.
Vermunt, J. (2005) Relations between student learning patterns and personal and contextual factors and academic performance. Higher Education. 49(3), 205–34.
Williams, E. (1992) Student attitudes towards approaches to learning and assessment. Assessment and Evaluation in Higher Education. 17(1), 45–58.
Wilson, K. L., Lizzio, A., Ramsden, P. (1997) The development, validation and application of the course experience questionnaire. Studies in Higher Education. 22(1), 33–53.

Action-based research
Bulpitt, H. M., Peter, J. (2005) Learning about reflection from the student. Active Learning in Higher Education. 6(3), 207–17.
Chang, H. (2005) Turning an undergraduate class into a professional research community. Teaching in Higher Education. 10(3), 387–94.
Clouder, L. (2005) Caring as a ‘threshold concept’: transforming students in higher education into health(care) professionals. Teaching in Higher Education. 10(4), 505–17.
Cowan, J., Creme, P. (2005) Peer assessment or peer engagement? Students as readers of their own work. LATISS – Learning and Teaching in the Social Sciences. 2(2), 99–119.
Egan, B. (1999) Effective teaching and the ineffective study. BERA. University of Sussex.
Gibbs, G. (1992) Improving the quality of student learning. Bristol: Technical and Educational Services.
Hockings, C. (2001) Evaluating alternative strategies for the development of mathematical thinking amongst undergraduate business studies students within the context of operations management. BERA. Leeds.
Hockings, C. (2004) Innovative teaching and learning - who benefits? A case study. BERA. Manchester.
Kinchin, I., De-Leij, F., Hay, D. (2005) The evolution of a collaborative concept mapping activity for undergraduate microbiology students. Journal of Further and Higher Education. 29(1), 1–14.
Latham, A., Gilbert, J. (1995) Hierarchical teams: action research project. In: Improving student learning through assessment and evaluation. Oxford: Oxford Centre for Staff Development.
Prosser, M., Ramsden, P., Trigwell, K., Martin, E. (2003) Dissonance in experience of teaching and its relation to the quality of student learning. Studies in Higher Education. 28(1), 37–48.
Rich, M., Brown, A. (2006) Underpinning students’ information literacy through the scholarship of teaching and learning. Higher Education Review. 38(2), 60–76.
Waite, S., Davis, B. (2006) Collaboration as a catalyst for critical thinking in undergraduate research. Journal of Further and Higher Education. 30(4), 405–19.
Wood, G., Campbell, M. (1995) Using a record of professional development to improve student learning. In: G. Gibbs (ed.) Improving student learning through assessment and evaluation. Oxford: Oxford Centre for Staff Development.

Induction and transition
Armstrong, P. K., Croft, A. C. (1999) Identifying the learning needs in mathematics of entrants to undergraduate engineering programmes in an English university. European Journal of Engineering Education. 24(1), 59–71.


Ballinger, G. J. (2002) Bridging the gap between A level and degree: some observations on managing the transitional stage in the study of English literature. Arts and Humanities in Higher Education. 2(1), 99–109.
Baxter, A., Hatt, S. (2000) ‘Everything must go!’ Clearing and first-year performance. Journal of Further and Higher Education. 24(1), 5–14.
Blicharski, J. R. D. (1999) New undergraduates: access and helping them prosper. Widening Participation and Lifelong Learning. 1(1), 34–40.
Booth, A. (1997) Listening to students: experiences and expectations in the transition to a history degree. Studies in Higher Education. 22(2), 205–20.
Booth, A. (2001) Developing history students’ skills in the transition to university. Teaching in Higher Education. 6(4), 487–503.
Brown, A., Moerkamp, T., Voncken, E. (1999) Facilitating progression to higher education from vocational paths. European Journal of Education. 34(2), 219–35.
Byrne, M., Flood, B. (2005) A study of accounting students’ motives, expectations and preparedness for higher education. Journal of Further and Higher Education. 29(2), 111–24.
Cook, A., Leckey, J. (1999) Do expectations meet reality? A survey of changes in first-year student opinion. Journal of Further and Higher Education. 23(2), 157–71.
Cox, W. (2000) Predicting the mathematical preparedness of first-year undergraduates for teaching and learning purposes. International Journal of Mathematical Education in Science and Technology. 31(2), 227–48.
Dalton, R. T. (2001) What do they bring with them? The fieldwork experiences of undergraduates on entry into higher education. Journal of Geography in Higher Education. 25(3), 379–93.
Durkin, K., Main, A. (2002) Discipline-based study skills support for first-year undergraduate students. Active Learning in Higher Education. 3(1), 24–39.
Edward, N., Middleton, J. (1997) Induction - a contextual approach to the start of engineers’ formation. Scottish Educational Research Association Annual Conference. Dundee.
Edward, N. S. (2001) Evaluation of a constructivist approach to student induction in relation to students’ learning styles. European Journal of Engineering Education. 26(4), 429–40.
Edward, N. S., Middleton, J. (2002) The challenge of induction! Introducing engineering students to higher education: a task-oriented approach. Innovations in Education and Teaching International. 39(1), 46–53.
Edward, N. S. (2003) First impressions last: an innovative approach to induction. Active Learning in Higher Education. 4(3), 226–42.
Fazey, D., Fazey, J. (2001) The potential for autonomy in learning: perceptions of competence, motivation and locus of control in first-year undergraduate students. Studies in Higher Education. 26(3), 345–61.
Forrester, G., Parkinson, G. (2004) Mind the gap: students’ expectations and perceptions of induction to distance learning in higher education. BERA. Manchester.
Forrester, G., Motteram, G., Parkinson, G., Slaouti, D. (2005) Going the distance: students’ experiences of induction to distance learning in higher education. Journal of Further and Higher Education. 29(4), 293–306.
Gaskin, S., Hall, R. (2002) Exploring London: a novel induction exercise for the new undergraduate. Journal of Geography in Higher Education. 26(2), 197–208.
Harrison, N. (2006) The impact of negative experiences, dissatisfaction and attachment on first year undergraduate withdrawal. Journal of Further and Higher Education. 30(4), 377–91.
Hoyles, C., Noss, R., Newman, K. (2001) Changing patterns of transition from school to university. International Journal of Mathematical Education in Science and Technology. 32(6), 829–45.
Knox, H. (2005) Making the transition from further to higher education: the impact of a preparatory module on retention, progression and performance. Journal of Further and Higher Education. 29(2), 103–10.
Laing, C., Robinson, A., Johnston, V. (2005) Managing the transition into higher education: An on-line Spiral Induction Programme. Active Learning in Higher Education. 6(3), 243–55.
Lowe, H., Cook, A. (2003) Mind the gap: are students prepared for higher education? Journal of Further and Higher Education. 27(1), 53–76.


May, S., et al. (2005) Feet under the table: students’ perceptions of the learning support provided during their first year of study on health and social care programmes. SRHE. Edinburgh.
Murphy, M. (2000) How the other half lives: a case study of successful and unsuccessful mature applicants in Irish higher education. SCUTREA Annual Conference. Nottingham.
Nardi, E. (1997) Didactical observations related to the teaching of advanced mathematics based on a study of first-year mathematics undergraduates’ difficulties in the encounter with mathematical abstraction. BERA. York.
Norton, L. S., Crowley, C. M. (1995) Can students be helped to learn how to learn? An evaluation of an ‘Approaches to Learning’ programme for first year degree students. Higher Education. 29(3), 307–28.
Roberts, J. (1995) Promoting skilled studying: attitudes and orientations to study of first year students on a modular degree programme. In: J. Allan et al. (eds) Approaches to learning in higher education. Wolverhampton: University of Wolverhampton, 22–29.
Smith, K. (2004) School to university: an investigation into the experience of first-year students of English at British universities. Arts and Humanities in Higher Education. 3(1), 81–93.
Smith, K., Todd, M. (2005) Mapping and understanding models of support for first year social science students: phase one report. York: Centre for Sociology, Anthropology and Politics (C-SAP), Higher Education Academy.
Tait, H., Godfrey, H. (2001) Enhancing the student experience for direct entrants to the penultimate year of undergraduate degree programmes. Journal of Further and Higher Education. 25(2), 259–65.

Teaching, curriculum and learning environments
Abdel Wahab, M. M., Mahboub, D. (2006) Students’ understanding and participation using ULearn: dynamics case. European Journal of Engineering Education. 31(4), 207–20.
Allen, V. (2005) A reflection on teaching law to business students. SRHE Conference. Edinburgh.
Anderson, C., Day, K. (2005) Purposive environments: engaging students in the values and practices of history. Higher Education. 49(3), 319–44.
Bamber, J., Galloway, V., Tett, L. (2006) Widening participation and meta-learning: risking less in HE. Journal of Adult and Continuing Education. 12(1), 20–33.
Beveridge, I. (1997) Teaching your students to think reflectively: the case for reflective journals. Teaching in Higher Education. 2(1), 33–43.
Bourner, J., Hughes, M., Bourner, T. (2001) First-year undergraduate experiences of group project work. Assessment and Evaluation in Higher Education. 26(1), 19–39.
Boyle, T., Bradley, C., Chalk, P., Fisher, K., Pickard, P. (2005) Introducing a virtual learning environment and learning objects into higher education courses. International Journal of Learning Technology. 1(4), 383–98.
Brooksbank, D. J., Clark, A., Hamilton, R., Pickernell, D. G. (1998) Views from the trenches: lessons from the introduction of WinEcon into a first year undergraduate programme. Computers in Higher Education Economics Review. 12(1), 13–18.
Bulpitt, H. M., Peter, J. (2005) Learning about reflection from the student. Active Learning in Higher Education. 6(3), 207–17.
Burt, R. (2004) A preliminary analysis of a music college as a learning culture. Professional Learning in a Changing Society Conference. Oslo.
Carver, S., Evans, A., Kingston, R. (2004) Developing and testing an online tool for teaching GIS concepts applied to spatial decision-making. Journal of Geography in Higher Education. 28(3), 425–38.
Cartney, P., Rouse, A. (2006) The emotional impact of learning in small groups: highlighting the impact on student progression and retention. Teaching in Higher Education. 11(1), 79–91.


Clarke, K., Lane, A. (2005) Seminar and tutorial sessions: a case study evaluating relationships with academic performance and student satisfaction. Journal of Further and Higher Education. 29(1), 15–23.
Crook, C., Barrowcliff, D. (2001) Ubiquitous computing on campus: Patterns of engagement by university students. International Journal of Human-Computer Interaction. 13(2), 245–56.
Culwin, F. (2006) An active introduction to academic misconduct and the measured demographics of misconduct. Assessment and Evaluation in Higher Education. 31(2), 167–82.
Davies, I., Hogarth, S. (2002) Evaluating Educational Studies. Evaluation and Research in Education. 16(2), 82–94.
De Vita, G. (2004) Integration and independent learning in a business synoptic module for international credit entry students. Teaching in Higher Education. 9(1), 69–81.
Egan, B. (1999) Effective teaching and the ineffective study. BERA. University of Sussex.
Entwistle, N. (1992) The impact of teaching on learning outcomes in higher education: a literature review. Sheffield: Committee of Vice-Chancellors and Principals of the Universities of the United Kingdom, Universities’ Staff Development Unit.
Entwistle, N. (2005) Ways of thinking and ways of teaching across contrasting subject areas. ISL Conference. London.
Entwistle, N., Peterson, E. (2004) Conceptions of learning and knowledge in higher education: relationships with study behaviour and influences of learning environments. International Journal of Educational Research. 41(6), 407–28.
Entwistle, N., Tait, H. (1990) Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments. Higher Education. 19, 169–94.
Fill, K. (2005) Student-focused evaluation of e-learning activities. ECER. Dublin.
Fill, K. (2006) Quality versus time: a rationale for blending learning? BERA. Warwick.
Finkelstein, N. (2005) Learning physics in context: a study of student learning about electricity and magnetism. International Journal of Science Education. 27(10), 1187–1209.
Frame, B. (1997) ‘Reading for Information’: How student teachers react to an interactive teaching approach. Scottish Educational Research Association Annual Conference. Dundee.
Franz, J., Ferreira, L. (1996) Students’ and lecturers’ conceptions in context: an interdisciplinary study. Teaching in Higher Education. 1(2), 325–40.
Garvin, J., Butcher, A., Stefani, L., Tariq, V., Lewis, M., Blumson, N., Govier, R., Hill, J. (1995) Group projects for first-year university students: an evaluation. Assessment and Evaluation in Higher Education. 20(3), 273–88.
Gibbs, G. (1992) Improving the quality of student learning. Bristol: Technical and Educational Services.
Gifford, C., Watt, P., Clark, W., Koster, S. (2005) Negotiating participation and power in a school setting: the implementation of active citizenship within the undergraduate sociology curriculum. LATISS – Learning and Teaching in the Social Sciences. 2(3), 175–90.
Hall, R. (2002) Aligning learning, teaching and assessment using the web: an evaluation of pedagogic approaches. British Journal of Educational Technology. 33(2), 149–58.
Hall, R. (2003) Forging a learning community? A pragmatic approach to cooperative learning. Arts and Humanities in Higher Education. 2(2), 155–72.
Hampel, R. (2006) Rethinking task design for the digital age: a framework for language teaching and learning in a synchronous online environment. ReCALL. 18(1), 105–21.
Harker, M., Koutsantoni, D. (2005) Can it be as effective? Distance versus blended learning in a web-based EAP programme. ReCALL. 17(2), 197–216.
Healey, M. (1992) Curriculum development and ‘enterprise’: group work, resource-based learning and the incorporation of transferable skills into a first year practical course. Journal of Geography in Higher Education. 16(1), 7–19.
Healey, M. (2005) Linking research and teaching to benefit student learning. Journal of Geography in Higher Education. 29(2), 183–201.
Healey, M., Matthews, H., Livingstone, I., Foster, I. (1996) Learning in small groups in university geography courses: Designing a core module around group projects. Journal of Geography in Higher Education. 20(2), 167–80.


Hilton, J., Challinor, M. (1995) Developing student learning autonomy via an outcome based multimodal delivery strategy. Higher Education for Capability Conference. UMIST, Manchester.
Hockings, C. (2001) Evaluating alternative strategies for the development of mathematical thinking amongst undergraduate business studies students within the context of operations management. BERA. Leeds.
Huxham, M. (2005) Learning in lectures: Do ‘interactive windows’ help? Active Learning in Higher Education. 6(1), 17–31.
Ibrahim, M., Ogston, S., Crombie, I., Alhasso, D., Mukhopadhyay, S. (2006) Greater knowledge gain with structured than student-directed learning in child health: cluster randomized trial. Medical Teacher. 28(3), 239–43.
Jones, R. (2006) A higher education ethos: a review of information and literature relating to the creation of an ethos of higher education in the context of further education. Widening Participation and Lifelong Learning. 8(2).
Jones, R., Cooke, L. (2006) A window into learning: case studies of online group communication and collaboration. ALT-J. 14(3), 261–74.
Knowles, G., Gilbert, S., Lansdell, J., Lloyd, M. (1999) Holding onto critical and reflective practice in primary undergraduate teacher education. BERA. University of Sussex.
Lally, V. (1999) Towards generic teaching and learning strategies through computer based collaborative group work progress and discussion. ECER. Lahti, Finland.
Laurillard, D. (1992) Phenomenographic research and the design of diagnostic strategies for adaptive tutoring systems. In: M. Jones, P. Winne (eds) Adaptive learning environments. Berlin: Springer-Verlag.
Laurillard, D. (1998) Multimedia and the learner’s experience of narratives. Computers and Education. 31(2), 229–42.
Lord, D. (1998) ICT supported multimedia learning materials: catering for individual learner differences. BERA. Belfast.
Love, N., Fry, N. (2006) Accounting students’ perceptions of a virtual learning environment: springboard or safety net? Accounting Education. 15(2), 151–66.
Lucas, U. (1998) Accounting for the world and the world of accounting: phenomenographic research in accounting education. Higher Education Close Up Conference. University of Central Lancashire.
McClune, B., Murphy, C. (2001) Medics in Primary Schools (MIPS) project: an evaluation report. Belfast: Queen’s University.
McCune, V., Hounsell, D. (2005) The development of students’ ways of thinking and practising in three final-year biology courses. Higher Education. 49(3), 255–89.
McGill, L., Nicol, D., Littlejohn, A., Grierson, H., Juster, N., Ion, W. J. (2005) Creating an information-rich learning environment to enhance design student learning: challenges and approaches. British Journal of Educational Technology. 36(4), 629–42.
Mann, S. (2003) Inquiring into a higher education classroom: insights into the different perspectives of teacher and students. In: C. Rust (ed.) Improving student learning: Theory and practice, 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
Martin, K. (2003) Student support in work based learning: interpreting cultural differences. Scottish Educational Research Association Annual Conference. Perth.
Monks, K., Conway, E., Dhuigneain, M. (2006) Integrating personal development and career planning: The outcomes for first year undergraduate learning. Active Learning in Higher Education. 7(1), 73–86.
Morris, P., O’Neill, F. (2006) Preparing for patient-centred practice: developing the patient voice in health professional learning. Professional Lifelong Learning: beyond reflective practice. Trinity and All Saints College, Leeds.
Nestel, D., Kidd, J. (2005) Peer assisted learning in patient-centred interviewing: the impact on student tutors. Medical Teacher. 27(5), 439–44.
Newlands, D., Ward, M. (1998) Using the web and e-mail as substitutes for traditional university teaching methods: student and staff experiences. University of Aberdeen.


Newman, M. (2003) A pilot systematic review and meta-analysis on the effectiveness of problem based learning. Newcastle-upon-Tyne: LTSN/ESRC.
Orsmond, P., Merry, S., Reiling, K. (2004) Undergraduate project work: can directed tutor support enhance skills development? Assessment and Evaluation in Higher Education. 29(5), 625–42.
Railton, D., Watson, P. (2005) Teaching autonomy: Reading groups and the development of autonomous learning practices. Active Learning in Higher Education. 6(3), 182–93.
Reeves, S., Freeth, D., McCrorie, P., Perry, D. (2002) ‘It teaches you what to expect in the future...’: Interprofessional learning on a training ward for medical, nursing, occupational therapy and physiotherapy students. Medical Education. 36(4), 337.
Roberts, C., Lawson, M., Newble, D., Self, A., Chan, P. (2005) The introduction of large class problem-based learning into an undergraduate medical curriculum: an evaluation. Medical Teacher. 27(6), 527–33.
Rogers, G. (2004) History, learning technology and student achievement: Making the difference? Active Learning in Higher Education. 5(3), 232–47.
Sanders, P., Stevenson, K., King, M., Coates, D. (2000) University students’ expectations of teaching. Studies in Higher Education. 25(3), 309–23.
Seddon, G., Pedrosa, M. (1988) A comparison of students’ explanations derived from spoken and written methods of questioning and answering. International Journal of Science Education. 10(3), 337–42.
Selwyn, N., Marriott, N., Marriott, P. (2000) Net gains or net pains? Business students’ use of the internet. Higher Education Quarterly. 54(2), 166–86.
Sharpe, R., Benfield, G. (2005) The student experience of e-learning in higher education: a review of the literature. Brookes eJournal of Learning and Teaching. 1(3).
Smith, K., Clegg, S., Lawrence, R., Todd, M. J. (2004) Fostering autonomy through work-based experiences: challenges for university educators and students. LATISS – Learning and Teaching in the Social Sciences. 1(3), 189–204.
Sweet, J. (2006) Beyond reflection dogma. Professional Lifelong Learning: beyond reflective practice. Trinity and All Saints College, Leeds.
Taylor, R. (2005) Creating a connection: tackling student attrition through curriculum development. Journal of Further and Higher Education. 29(4), 367–74.
Topping, K. J., Watson, G. A., Jarvis, R. J., Hill, S. (1996) Same-year paired peer tutoring with first year undergraduates. Teaching in Higher Education. 1(3), 341–56.

Student perceptions of learning
Ashwin, P. (2003) Variation in students’ experience of small group tutorials. In: C. Rust (ed.) Improving student learning: Theory and practice, 10 years on. Oxford: Oxford Centre for Staff and Learning Development, 251–56.
Ashwin, P. (2005) Variation in students’ experiences of the ‘Oxford tutorial’. Higher Education. 50(4), 631–44.
Baderin, M. (2005) Towards improving students’ attendance and quality of undergraduate tutorials: a case study on law. Teaching in Higher Education. 10(4), 435–48.
Bamber, J., Tett, L. (2000) Transforming the learning experiences of non-traditional students: a perspective from higher education. Studies in Continuing Education. 22(1), 57–75.
Bamber, J., Tett, L. (2001) Ensuring integrative learning experiences for non-traditional students in higher education. Widening Participation and Lifelong Learning. 3(1), 8–16.
Bamber, J., Galloway, V., Tett, L. (2006) Widening participation and meta-learning: risking less in HE. Journal of Adult and Continuing Education. 12(1), 20–33.
Beard, C. (2005) Student achievement: The role of emotions in motivation to learn - emotional maps. Sheffield: Higher Education Academy/Sheffield Hallam University.
Bekhradnia, B., Whitnall, C., Sastry, T. (2006) The academic experience of students in English universities. Oxford: HEPI.
Borland, J., James, S. (1999) The learning experience of students with disabilities in higher education: a case study of a UK university. Disability and Society. 14(1), 85–101.


Bowl, M. (2003) Non-traditional entrants to higher education: They talk about people like me. Stoke on Trent: Trentham Books.
Bradbeer, J., Healey, M., Kneale, P. (2004) Undergraduate geographers’ understandings of geography, learning and teaching: a phenomenographic study. Journal of Geography in Higher Education. 28(1), 17–34.
Burke, V., Jones, I., Doherty, M. (2005) Analysing student perceptions of transferable skills via undergraduate degree programmes. Active Learning in Higher Education. 6(2), 132–44.
Carney, C., McNeish, S. (2005) Listening to the needs of the mature student - a qualitative survey. Widening Participation and Lifelong Learning. 7(3), 16–23.
Castles, J. (2004) Persistence and the adult learner: factors affecting persistence in Open University students. Active Learning in Higher Education. 5(2), 166–79.
Concannon, F., Flynn, A., Campbell, M. (2005) What campus-based students think about the quality and benefits of e-learning. British Journal of Educational Technology. 36(3), 501–12.
Cooke, R., Barkham, M., Audin, K., Bradley, M., Davy, J. (2004) How social class differences affect students’ experiences of university. Journal of Further and Higher Education. 28(4), 407–21.
Crozier, W. R., Garbert-Jones, A. (1996) Finding a voice: shyness in mature students’ experience of university. Adults Learning. 7(8), 195–98.
Cullingford, C., Crowther, K. (2005) Student learning and mathematics in higher education. Higher Education Review. 37(3), 33–43.
Curtis, S., Shani, N. (2002) The effect of taking paid employment during term-time on students’ academic studies. Journal of Further and Higher Education. 26(2), 129–38.
Davies, I., Hogarth, S. (2002) Evaluating Educational Studies. Evaluation and Research in Education. 16(2), 82–94.
Davies, I., Hogarth, S. (2004) Perceptions of Educational Studies. Educational Studies. 30(4), 425–39.
Davies, J., Graff, M. (2005) Performance in e-learning: online participation and student grades. British Journal of Educational Technology. 36(4), 657–63.
Drake, P. (2005) Undergraduate mathematics diversified for non-standard entrants - whatever next! A case of teaching assistant and the curriculum. SCUTREA. University of Sussex.
Drew, L., Bailey, S., Shreeve, A. (2000) Phenomenographic research: methodological issues arising from a study investigating student approaches to learning in fashion design. ECER. Edinburgh.
Drew, S. (1998) Students’ perceptions of their learning outcomes. Teaching in Higher Education. 3(2), 197–217.
Entwistle, N., Peterson, E. (2004) Conceptions of learning and knowledge in higher education: relationships with study behaviour and influences of learning environments. International Journal of Educational Research. 41(6), 407–28.
Fill, K. (2006) Quality versus time: a rationale for blending learning? BERA. Warwick.
Forsyth, A., Furlong, A. (2003) Losing out? Socioeconomic disadvantage and experience in further and higher education. Bristol: Policy Press.
Franz, J., Ferreira, L. (1996) Students’ and lecturers’ conceptions in context: an interdisciplinary study. Teaching in Higher Education. 1(2), 325–40.
Fuller, M., Healey, M., Bradley, A., Hall, T. (2004) Barriers to learning: a systematic study of the experience of disabled students in one university. Studies in Higher Education. 29(3), 303–18.
Haggis, T. (2003) Building, tunnelling and waiting: mature students speaking differently about learning. SCUTREA Annual Conference. Bangor.
Hall, J., Tinklin, T. (1998) Students first: the experiences of disabled students in higher education. Edinburgh: Scottish Council for Research in Education.
Hatt, S., Hannan, A., Baxter, A. (2005) Bursaries and student success: a study of students from low-income groups at two institutions in the south west. Higher Education Quarterly. 59(2), 111–26.
Hounsell, D., McCune, V. (2003) Students’ experiences of learning to present. In: C. Rust (ed.) Improving student learning: Theory and practice, 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
Houston, M., Lebeau, Y. (2006) The social mediation of university learning. ESRC/HEA.
Hughes, J., Slack, K., Baker, C. (2006) Learning Journeys research project. BERA. Warwick.


Humphrey, R. (2006) Pulling structured inequality into higher education: the impact of part-time working on English university students. Higher Education Quarterly. 60(3), 270–86.
Lane, A., Hall, R., Lane, J. (2004) Self-efficacy and statistics performance among sports studies students. Teaching in Higher Education. 9(4), 435–48.
Laurillard, D. (1997) Styles and approaches in problem-solving. In: F. Marton, D. Hounsell, N. Entwistle (eds) The experience of learning. Edinburgh: Scottish Academic Press.
Lawrence, J., Ashford, K., Dent, P. (2006) Gender differences in coping strategies of undergraduate students and their impact on self-esteem and attainment. Active Learning in Higher Education. 7(3), 273–82.
Lea, S. J., Stephenson, D., Troy, J. (2003) Higher education students’ attitudes to student-centred learning: beyond ‘educational bulimia’? Studies in Higher Education. 28(3), 321–34.
Leathwood, C. (2006) Gender, equity and the discourse of the independent learner in higher education. Higher Education. 52(4), 611–34.
Lillis, T., Grainger, K. (1998) Exploring the socio-discursive space of higher education: focus on student-tutor talk. Higher Education Close Up Conference. University of Central Lancashire.
Lucas, U. (1998) Accounting for the world and the world of accounting: phenomenographic research in accounting education. Higher Education Close Up Conference. University of Central Lancashire.
Lucas, U., Cox, P., Croudace, C., Milford, P. (2004) ‘Who writes this stuff?’: students’ perceptions of their skills development. Teaching in Higher Education. 9(1), 55–68.
Mann, S. (2003) Inquiring into a higher education classroom: insights into the different perspectives of teacher and students. In: C. Rust (ed.) Improving student learning: Theory and practice, 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
McCune, V. (2004) Development of first-year students’ conceptions of essay writing. Higher Education. 47(3), 257–82.
Miller, P. M. (1994) The first year at medical school: some findings and student perceptions. Medical Education. 28(1), 5–7.
Murphy, M., Fleming, T. (1998) College knowledge: power, policy and the mature student experience at university. SCUTREA Annual Conference. Exeter.
Neill, N., Mulholland, G., Ross, V., Leckey, J. (2004) The influence of part-time work on student placement. Journal of Further and Higher Education. 28(2), 123–37.
O’Connor, M. (2003) Perceptions and experience of learning at university: What is it like for undergraduates? Research in Post-Compulsory Education. 8(1), 53–72.
Orsini-Jones, M., Courney, K., Dickinson, A. (2005) Supporting foreign language learning for a blind student: a case study from Coventry University. Support for Learning. 20(3), 146–52.
Raddon, A. (2006) Absence as opportunity: learning outside the institutional space and time. Journal of Further and Higher Education. 30(2), 157–67.
Rhodes, C., Nevill, A. (2004) Academic and social integration in higher education: a survey of student satisfaction and dissatisfaction within a first-year education studies cohort at a new university. Journal of Further and Higher Education. 28(2), 179–93.
Robotham, D. (2004) Using interviews in researching student learning: a true and valid account? Teaching in Higher Education. 9(2), 225–33.
Sanders, P., Stevenson, K., King, M., Coates, D. (2000) University students’ expectations of teaching. Studies in Higher Education. 25(3), 309–23.
Selwyn, N., Marriott, N., Marriott, P. (2000) Net gains or net pains? Business students’ use of the internet. Higher Education Quarterly. 54(2), 166–86.
Slack, K., Casey, L. (2002) The best years of your life? Contrasting the local and non-local student experience in higher education. BERA.
Smith, F. (2004) It’s not all about grades: accounting for gendered degree results in geography at Brunel University. Journal of Geography in Higher Education. 28(2), 167–78.
Stokes, C. W., Cannavina, C., Cannavina, G. (2005) The state of readiness of student health professionals for web-based learning environments. Health Informatics Journal. 10(3), 195–204.


Sutherland, P. (2003) Case studies of the learning and study skills of first year students. Research in Post-Compulsory Education. 8(3), 425–40.
Thomas, L. (2002a) Building social capital to improve student success. BERA. Exeter.
Thomas, L. (2002b) Student retention in higher education: the role of institutional habitus. Journal of Education Policy. 17(4), 423–42.
Todd, M., Bannister, P., Clegg, S. (2004) Independent inquiry and the undergraduate dissertation: perceptions and experiences of final-year social science students. Assessment and Evaluation in Higher Education. 29(3), 335–56.
Trigwell, K., Ashwin, P. (2003) Undergraduate students’ experience of learning at the University of Oxford. Oxford: Institute for the Advancement of University Learning, University of Oxford.
Turner, P. (2004) An analytical model for understanding undergraduate learning during placements and practical laboratory classes. BERA New Researchers/Student Conference. Manchester.
Yorke, M. (2000) The quality of the student experience: what can institutions learn from data relating to non-completion? Quality in Higher Education. 6(1), 61–75.

Assessment and Feedback

Brown, N., Graff, M. (2004) Student performance in business and accounting subjects as measured by assessment results: an exploration of the relevance of personality traits, identified using meta programmes. International Journal of Management Education. 4(1), 3–18.
Crook, C., Gross, H., Dymott, R. (2006) Assessment relationships in higher education: the tension of process and practice. British Educational Research Journal. 32(1), 95–114.
Fellenz, M. (2004) Using assessment to support higher level learning: the multiple choice item development assignment. Assessment and Evaluation in Higher Education. 29(6), 703–20.
Fitzpatrick, J. (2006) An evaluative case study of the dilemmas experienced in designing a self-assessment strategy for Community Nursing students. Assessment and Evaluation in Higher Education. 31(1), 37–53.
Freewood, M., Spriggs, L. (2003) Striving for genuine inclusion: academic assessment and disabled students. In: C. Rust (ed.) Improving student learning: Theory and practice, 10 years on. Oxford: Oxford Centre for Staff and Learning Development.
Gibbs, G., Lucas, L. (1997) Coursework assessment, class size and student performance: 1984–94. Journal of Further and Higher Education. 21(2), 183–92.
Harris, R. (2005) Testing times: traditional examination and asynchronous learning. Journal of Geography in Higher Education. 29(1), 101–14.
Higgins, R. (2000) Be more critical!: Rethinking assessment feedback. BERA. Cardiff.
Hills, L., Glover, C. (2005) Evaluating the evaluation tools: methodological issues in the FAST project. BERA. Glamorgan.
Hinett, K. (1998) The role of dialogue and self assessment in improving student learning. BERA. Belfast.
James, D. (1997) Making the graduate: Assessment events as social practices. BERA. York.
Knight, J. (2004) Comparison of student perception and performance in individual and group assessments in practical classes. Journal of Geography in Higher Education. 28(1), 63–81.
Langan, M., Wheater, P., Shaw, E., Haines, B., Cullen, R., Boyle, J., Penney, D., Oldekop, J., Ashcroft, C., Lockey, L., Preziosi, R. (2005) Peer assessment of oral presentations: effects of student gender, university affiliation and participation in the development of assessment criteria. Assessment and Evaluation in Higher Education. 30(1), 21–34.
MacDonald, J. (2002) Developing competent e-learners: the role of assessment. Learning Communities and Assessment Cultures Conference. University of Northumbria.
MacDonald, J. (2004) Developing competent e-learners: the role of assessment. Assessment and Evaluation in Higher Education. 29(2), 215–26.
MacMillan, J., McLean, M. (2005) Making first-year tutorials count: operationalising the assessment-learning connection. Active Learning in Higher Education. 6(2), 94–105.


McDowell, L., Mowl, G. (1995) Innovative assessment: its impact on students. In: G. Gibbs (ed.) Improving student learning through assessment and evaluation. Oxford: Oxford Centre for Staff Development.
Norton, L. (2004) Using assessment criteria as learning criteria: a case study in psychology. Assessment and Evaluation in Higher Education. 29(6), 687–702.
Orsmond, P., Merry, S., Reiling, K. (2005) Biology students’ utilisation of tutors’ formative feedback: a qualitative interview study. Assessment and Evaluation in Higher Education. 30(4), 369–86.
Pitts, S. (2005) ‘Testing, testing...’: How do students use written feedback? Active Learning in Higher Education. 6(3), 218–29.
Rust, C., O’Donovan, B., Price, M. (2005) A social constructivist assessment process model: how the research literature shows us this could be best practice. Assessment and Evaluation in Higher Education. 30(3), 231–40.
Smyth, K. (2004) The benefits of students learning about critical evaluation rather than being summatively judged. Assessment and Evaluation in Higher Education. 29(3), 369–78.
Somervell, H., Allan, J. (1995) Self-assessment in two education studies modules at level 2/3 at the University of Wolverhampton. In: J. Allan (ed.) Approaches to learning in higher education. Wolverhampton: University of Wolverhampton, Educational Research Unit, 6–12.
Thomson, K., Falchikov, N. (1998) Full on until the sun comes out: the effects of assessment on student approaches to studying. Assessment and Evaluation in Higher Education. 23(4), 379–90.
Tooth, D., Tonge, K., McManus, I. C. (1989) Anxiety and study methods in preclinical students: causal relation to examination performance. Medical Education. 23(5), 416–21.
Trotter, E. (2006) Student perceptions of continuous summative assessment. Assessment and Evaluation in Higher Education. 31(5), 505–22.
Weaver, M. (2006) Do students value feedback? Students’ perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education. 31(3), 379–94.

Quality assurance and enhancement⁸

Audin, K. et al. (2003) University Quality of Life and Learning (UNIQoLL), an approach to student well-being, satisfaction and institutional change. Journal of Further and Higher Education. 27(4), 365–82.
Barrie, S., Ginns, P., Prosser, M. (2005) Early impact and outcomes of an institutionally aligned, student focused learning perspective on teaching and quality assurance. Assessment and Evaluation in Higher Education. 30(6), 641–56.
Bewick, B. M., Bradley, M., Barkham, M. (2004) Student perceptions of the University of Leeds Experience. University Quality of Life and Learning (UNIQoLL) Project – Report 1. Leeds: University of Leeds.
Coffey, M., Gibbs, G. (2001) The evaluation of the Student Evaluation of Educational Quality Questionnaire (SEEQ) in UK higher education. Assessment and Evaluation in Higher Education. 26(1), 89–93.
Green, D., Brannigan, C., Mazelan, P., Giles, L. (1994) Measuring student satisfaction: A method of improving the quality of the students’ experience? In: S. Haselgrove (ed.) The student experience. Buckingham: SRHE/OUP, 100–07.
Gregory, R., Thorley, L., Harland, G. (1995) Using a student experience questionnaire for improving teaching and learning. In: G. Gibbs (ed.) Improving student learning through assessment and evaluation. Oxford: Oxford Centre for Staff Development.
McKenzie, J., Sheely, S., Trigwell, K. (1998) Drawing on experience: An holistic approach to student evaluation of courses. Assessment and Evaluation in Higher Education. 23(2), 153–63.
Otter, S. (1992) Learning Outcomes in Higher Education. London: Unit for the Development of Adult Education.

⁸ Supplemented by additional search strategy


Rhodes, C., Nevill, A. (2004) Academic and social integration in higher education: a survey of student satisfaction and dissatisfaction within a first-year education studies cohort at a new university. Journal of Further and Higher Education. 28(2), 179–93.
Townley, P. (2001) The construction of a model of qualitative evaluation to support the development of the policy and practice of raising student satisfaction in an institution in the higher education sector: Using focus groups as a research instrument in the pursuit of qualitative based research. Higher Education Close Up Conference 2. University of Lancaster.
Trowler, P., Fanghanel, J., Wareham, T. (2005) Freeing the chi of change: the Higher Education Academy and enhancing teaching and learning in higher education. Studies in Higher Education. 30(4), 427–44.
