A review of Australian and international quality systems and indicators of learning and teaching

August 2007, Version 1.2
Denise Chalmers

© Carrick Institute for Learning and Teaching in Higher Education Ltd, 2007. This work is copyright. Apart from any use as permitted under the Copyright Act 1968, no part may be reproduced by any process without prior written permission from the Carrick Institute for Learning and Teaching in Higher Education Ltd, an initiative of the Australian Government Department of Education, Science and Training.

The views expressed in this report do not necessarily reflect the views of the Carrick Institute for Learning and Teaching in Higher Education Ltd or the Australian Government.

Denise Chalmers can be contacted at the Carrick Institute for Learning and Teaching in Higher Education Ltd, PO Box 2375, Strawberry Hills NSW 2012, through the website: www.carrickinstitute.edu.au or email: [email protected]


CONTENTS

ACKNOWLEDGEMENTS ....................................................... 5
EXECUTIVE SUMMARY ...................................................... 6
  Indicators of quality teaching and learning ......................... 8
INTRODUCTION ........................................................... 9
SECTION 1: AUSTRALIAN INITIATIVES IN QUALITY TEACHING AND LEARNING IN HIGHER EDUCATION ... 11
  The Higher Education Quality Framework and recent quality initiatives ... 13
  Quality auditing .................................................... 15
  Other Commonwealth quality initiatives .............................. 16
  Funding initiatives for learning and teaching quality ............... 20
  Summary of the Australian higher education context .................. 24
SECTION 2: GLOBAL TRENDS AND QUALITY INITIATIVES IN TEACHING AND LEARNING ... 26
  2.1: Global trends in teaching and learning ......................... 26
  2.2: Indicators of student experience, satisfaction and engagement .. 42
  2.3: Indicators of student learning ................................. 55
  Summary of global trends and issues ................................. 68
SECTION 3: INDICATORS OF QUALITY TEACHING AND LEARNING ................. 70
  Performance indicators in common use ................................ 71
  Institutional concerns about national level performance indicators .. 75
  Institution level performance indicators supported by evidence ...... 79
  Institutional climate and systems ................................... 80
  Diversity and inclusivity ........................................... 85
  Assessment .......................................................... 88
  Engagement and learning community ................................... 92
  Benchmarking in institutions ........................................ 95
  A review of current practice in teaching and learning in Australian universities ... 96
  Summary of indicators of quality teaching and learning .............. 98
  Conclusion .......................................................... 99
REFERENCES ............................................................ 100
LIST OF ACRONYMS ...................................................... 121



ACKNOWLEDGEMENTS

A number of people made a significant contribution to this report through the provision of expertise, support and advice. The members of the Carrick Board provided support when they approved the project proposal and allocated funding in 2006. The members of the Carrick Reference Group and the Carrick Institute Directors have provided valuable advice and insights. This report could not have been written without the contributions of the Carrick Institute Project research team and the significant work undertaken on the Stage 1 studies:

• International and national indicators and outcomes of quality teaching currently in use. Ms Katie Lee
• Higher education teacher and teaching indicators and outcomes and their evidence base at institutional through to individual teacher levels. Ms Tina Cunningham
• University teacher, teaching indicators and outcomes in use in Australian Universities (Survey of practice). Ms Kate Thomson
• Learning indicators and outcomes at the international, national, institutional and individual teacher levels. Ms Julia Gobel
• Student surveys on teaching and learning in use in Australian institutions. Dr Simon Barrie, Dr Paul Ginns, & Ms Rachel Symons, Institute for Learning and Teaching, University of Sydney.

Thirty-four universities nominated a number of people to provide advice on their institutional policies and practices. In turn, these people sought information and advice from others. To all of these people we are indebted for their willing support. Professor Judyth Sachs provided valuable advice and comment on the structure and design of Stage 1 of the Carrick project and on this report. Professor Sachs has also provided leadership in agreeing to lead the pilot universities' review of the framework. It should be noted that the findings, conclusions and wording of the report are the responsibility of the author and do not necessarily reflect the views of the Reference Group or the Carrick Institute.

Denise Chalmers
Carrick Institute for Learning and Teaching in Higher Education



EXECUTIVE SUMMARY

This report provides an overview of the quality processes and trends in teaching and learning in Australia and in several OECD countries, and of indicators of teaching and learning performance at the national and university levels. It briefly outlines the project initiated by the Carrick Institute for Learning and Teaching in Higher Education (Carrick Institute).

Australia has an established and effective quality framework for higher education. The national government has systematically implemented quality reviews and audits, established frameworks and guidelines for accreditation, and established mechanisms by which quality research and teaching can be identified. Within the higher education sector, much has been achieved and recognised as leading practice: the early initiative of administering national student course experience and graduate destinations surveys has triggered the implementation of similar practices elsewhere. The quality auditing process is well regarded and is considered effective and practical. The proposed research quality audit framework is attempting to avoid some of the more problematic aspects of other national systems and to capture the impact dimension. The national data collection process through the Institutional Assessment Framework has evolved into its current form to improve the quality of the national data collection methods.

Many of the global trends noted in the review of international practices in teaching and learning are already well established or in train in the Australian higher education sector. These include:

• National student experience survey (CEQ)
• National graduate destination survey (GDS)
• National system of quality auditing on a 5 year cycle (AUQA)
• National accreditation protocol and qualification framework
• National data collection of information related to students and universities (IAF)
• National fund to reward quality teaching and learning (LTPF)
• National awards for quality teaching (CAAUT)
• National institute to provide a national focus for the enhancement of learning and teaching in higher education institutions (Carrick Institute for Learning and Teaching in Higher Education)
• National research quality framework (RQF)
• Transnational Quality strategy for international education on and off shore

The Australian higher education sector has achieved these significant initiatives in collaboration with the Commonwealth, the States and the higher education institutions.


Global trends and initiatives in teaching and learning

A pervasive trend across all of the countries reviewed is the establishment of national systems of accreditation, quality processes and audit and requirements to provide information on performance indicators. Performance indicators at the national/regional level fall into five broad categories:

1. Common institutional indicators that are required by quality audit and accreditation processes
2. Centralised collection of mandated data that may be subsequently reported in national/regional reports
3. Survey data from students on their satisfaction, engagement, learning experiences and employment
4. Tests of learning: readiness, generic, professional/graduate admissions
5. Ranking and league tables that select data from the centrally collected and publicly available information.

Trends evident in higher education include:

• Higher education is now more than ever seen as an economic commodity, with increased interest in linking employment outcomes to higher education (employment and graduate destinations). This in turn has led to interest from governments and funding agencies in measuring the employability of students through measures of learning and their employment outcomes.
• There has been a global trend to develop and use performance indicators at the national/sector level, as evidenced by the PISA study, the Measuring Up reports and international rankings.
• There is growing interest in identifying ‘direct measures’, particularly of student learning.
• There is increasing interest in performance funding based on measures and indicators.
• There is a renewed interest in benchmarking at the national and regional level (e.g., the European Higher Education Area).
• There is greater emphasis on quality auditing and accreditation within countries and regional groupings (e.g., Bologna, Higher Education Area, U.S.)
• In European countries there are steady moves to assign greater autonomy and independence to higher education institutions with less direct involvement from governments through quality auditing and accreditation mechanisms. By way of contrast, there are calls for greater government oversight of higher education institutions in the US through the use of standardised indicators and measures.
• There are concerns expressed by researchers and higher education institutions about the impact of national/sector performance indicators on the autonomy and diversity of institutions.

While there are clear trends emerging of greater oversight and a desire for standardised measures of learning and effectiveness at the national level, this trend should be interpreted cautiously. The more promising measures and indicators are those that are situated in institutional practice.

Indicators of quality teaching and learning

The majority of work completed on performance indicators in higher education has been undertaken with reference (explicit or implicit) to the expectations of external bodies which have an interest in performance and comparability between universities. Relatively little emphasis has been given to aspects of intra-institutional performance. This report suggests that it is at this level that indicators can be most usefully employed, and are most likely to lead to an enhanced learning environment which benefits students. Four dimensions of teaching practice are identified in this report:

1. Institutional climate and systems
2. Diversity and inclusivity
3. Assessment
4. Engagement and learning community

Each dimension can draw on an extensive range of indicators and measures that have been shown to provide or have an impact on the quality of student learning and the student and staff experience. If information is collected and interpreted judiciously, it will provide institutions with the opportunity to review their practices and processes in a way that demonstrates effectiveness and provides directions for enhancing the quality of teaching and learning. Once established, some of these indicators will be suitable for use at the sector and national level. However, it must be recognised that all such measures and indicators can only be considered proxies at the national/sector level. Fragmentation can occur when institutions are required to collect a battery of information that has little relevance for institutional practice or interpretation, does not relate to teaching quality, and offers little direction for improvement.

This report concludes that if there is to be real engagement of higher education institutions in developing and implementing teaching and learning indicators, then the focus needs to be on quality enhancement at the institutional level. Once the measures and indicators are established in institutions, judicious selection of some of these can then be considered for inclusion at the sector and national levels.


INTRODUCTION

This report provides an overview of the quality processes and trends in teaching and learning in Australia and internationally, and briefly outlines the project initiated by the Carrick Institute for Learning and Teaching in Higher Education (Carrick Institute).

The Carrick Institute project is conceived as taking place in four stages. The first stage, titled Investigation and development of framework, involves examining ways in which quality teaching and learning is recognised and rewarded at the individual, institutional, national and international levels. The purpose of this stage is to provide a comprehensive overview of what is currently recognised as quality teaching and learning at each of the four levels and how it is assessed. Several detailed reports will be written, and these will form the basis from which a Framework will be developed that will propose indicators of quality teaching and learning at each of the levels. An examination of the ways in which quality teaching is rewarded will also be undertaken at the individual, institutional, national and international levels. The methodology involves ongoing consultation at the four levels, empirical research, literature review and environmental scanning.

Stage 2, titled Pilot implementation of framework, takes the draft Framework through a process of extensive consultations with the sector and a trial in a limited number of higher education institutions to test its usefulness. This will involve the pilot institutions examining and revising their relevant policies and practices that impact on the quality of teaching and learning, establishing the necessary infrastructure and systems to gather and interpret the data, and implementing strategies to build a culture that values, recognises and rewards quality teaching and learning. Following the trial, the tools and matrices, case studies and guidelines for implementing the framework will be made widely available and promoted within the sector. Stages 3 and 4 involve sector-wide up-scaling and benchmarking and are briefly described on the Carrick Institute project website.

This report provides an overview of the literature and general trends and issues. These will be developed further in a number of more comprehensive reports. In addition, progress reports will be published and widely disseminated through each stage of the project. Readers of this report are encouraged to refer to these as they become available.

This report is structured in three sections:

1. Section 1 focuses on the Australian context and outlines the current practices and initiatives at the national level.
2. Section 2 provides an overview of global initiatives related to the quality of teaching and learning by country or region. Commonly used student surveys and tests of learning used to identify quality of teaching and learning are described. It concludes with issues that surround the use of some of the measures at the national level.
3. Section 3 provides an overview of performance indicators of teaching and learning that have a substantial evidence base to support their use in institutions, some of which will be suitable to be reported up to the national level.


SECTION 1: AUSTRALIAN INITIATIVES IN QUALITY TEACHING AND LEARNING IN HIGHER EDUCATION

This section focuses on the Australian context and outlines the current practices and initiatives in determining and identifying quality teaching and learning at the national level.

Higher education has come under increasing pressure from both external and internal sources in the past 20 years to produce systematic evidence of its effectiveness and efficiency (e.g. Guthrie & Neumann, 2006; Doyle, 2006; Hayford, 2003). Most commentators believe that this pressure will only increase in the future (Doyle, 2006). This global trend has not been limited to higher education; similar trends are evident in schools and vocational education, where reforms have resulted in an increase in centralised and mandated policies and oversight related to standardisation of access, curriculum, teaching and assessment, in contrast to local or state based direction setting, policies and oversight.

Greater demonstration of effectiveness and efficiency is sought by both government and the higher education institutions: by the government to demonstrate that public money is well spent, and by the institutions to maximise the impact of the money available. But it is not just about economic value; it is also about educational, social and political values (Reindl & Brower, 2001; Trowler et al., 2005; Ward, 2007). Governments and institutions share many values, for example a commitment to widening access and greater participation in higher education. Where institutions and governments often diverge is on the values of autonomy and diversity, with institutions expressing concern with what is perceived as an unnecessarily intrusive degree of oversight and level of accountability, and reporting that is burdensome and considered to provide little value to the institution and its operations (PhillipsKPA, 2006a). Institutions are concerned that future initiatives by the government may result in more oversight of their operations, and greater compliance requirements that may limit their autonomy, flexibility and responsiveness. Governments are concerned about whether the education provided equips students for employment and will provide the nation with a highly skilled workforce that supports economic and social growth. Governments are accountable to the public and need to demonstrate that a quality education is being provided by institutions, regardless of their particular mission and focus.

Higher education institutions have a well established record of reviewing and establishing quality systems. It is asserted that, by definition, academic endeavour is embedded in the concept of quality (Anderson, 2006; Vidovich & Currie, 1998), for who would aspire to mediocre teaching and research? Assessment moderation systems, curriculum review, examination of research theses and peer review of research and publications are examples of well established quality systems. Sector wide examples include representative and professional bodies establishing agreed principles, codes of conduct and guidelines, and conducting disciplinary reviews and professional accreditation (Guthrie & Neumann, 2006; Brennan & Shah, 2000). Higher education institutions have
progressively implemented more systematic, formalised quality assurance processes, recognising this as a way to achieve greater efficiencies and accountability within their organisation in increasingly constrained financial times (Burke & Minassians, 2001), as well as an appropriate response to company, corporations and legislative requirements to operate efficiently and effectively.

The quality agenda has not been embraced by all in the sector, with many academic staff at best ambivalent and at worst hostile to what is perceived to be increasing managerialism within universities and unreasonable external pressures from governments and their agents. Empirical studies consistently report academic staff's disenchantment with formal notions of quality assurance (Anderson, 2006; McInnis et al., 1994; Newton, 2000, 2002). Reasons for these responses include: differences in understanding of what constitutes quality; concerns about the effectiveness of the mechanisms and processes associated with formal quality assurance processes; doubts about the use of metrics and quantification of complex areas; perceived loss of autonomy and personal power or agency; and the effort and time involved in complying with quality requirements with no obvious gain or benefit evident in their own work, or that of their students. These concerns are well founded, as the information gathered for institutional and national compliance and quality purposes is largely seen as unrelated to, and removed from, what is important to teachers and their students – engaging together in the process of learning and teaching.

An analysis of national policy initiatives to enhance higher education learning and teaching in the United Kingdom found that several of the policy initiatives were based on contrasting underlying theories of change and development (Trowler, Fanghanel & Wareham, 2005). These coexisted with other higher education policies around funding, research and access, each with their own tacit underlying theories, to form what was described as an “incoherent ‘policy bundle’ which are implemented with disconnected and disjointed strategies, in an increasing managerialist environment in which work intensification and degradation of resources is occurring” (2005, p. 432). There is potential for this to occur in Australia, where there might be successive initiatives and reforms developed in different sections and departments of government with little reference to each other. The importance of taking account of the totality of previous initiatives and reforms, and of referring to and reviewing them regularly to ensure that government initiatives do not become an incoherent policy bundle, cannot be overstated.

The at times adversarial responses between governments and their agents and higher education institutions are indicative of the ways in which organisations can respond to external pressures for initiatives that are not based on shared values and understandings. The challenge is to build a relationship that recognises the legitimate goals of each stakeholder and works towards productive ways to achieve them. The importance of building a positive relationship between the government and the universities in setting policy programs and reporting is emphasised by PhillipsKPA (2006a), who suggest that there is a growing case for the Commonwealth to reposition its relationship with the universities.

The Commonwealth clearly has a vital role in setting the policy and regulatory framework (in conjunction with the states) to ensure that the higher education sector operates in a way which promotes the public good. The Commonwealth also has a vital role as the largest single source of funding for the public universities and as a manager of the income contingent loans programs. However, it would be open to the Commonwealth to discharge these roles in a way which positioned universities more as independent enterprises than as entities subject to very specific operational intervention by Government (PhillipsKPA, 2006a, p. 43).

The Higher Education Quality Framework and recent quality initiatives

The Australian Government has taken an active role in promoting quality assurance in universities since the 1980s, when there was a perceived need for universities to improve their efficiency, effectiveness and public accountability. The first direct initiative was the funding of discipline reviews across the sector from 1985 to 1991. These were conducted by the Commonwealth Tertiary Education Commission to determine standards and to improve quality and efficiency in major fields of study such as Engineering, Law and Computing.

In 1989, the Government commissioned a team led by Professor Russell Linke to identify performance indicators to assess the quality of higher education. The Linke report asserted that quality would be best assessed using multiple indicators that are “sensitively attuned to the specific needs and characteristics of particular disciplines, and which adequately reflect the underlying purpose for which the assessment is required” (Performance Indicators Research Group, 1991, pp. 129–130). The report also suggested that judgements of the quality of teaching must flow from the analysis of multiple characteristics, and involve a range of procedures including qualitative peer and student evaluation (Performance Indicators Research Group, 1991). Three categories of indicators on teaching and learning were identified: quality of teaching; student progress and achievement; and graduate employment.

Category of indicator               Performance indicator
Quality of teaching                 Perceived teaching quality – Course Experience Questionnaire (CEQ)
Student progress and achievement    Student progress rate; program completion rate; mean completion time; research higher degree productivity rate
Graduate employment                 Graduate employment status

In 1992, quality assurance was moved from a discipline-based to a whole-of-institution approach, with the Government establishing the Committee for Quality Assurance in Higher Education (CQAHE) to conduct independent audits of institutional quality assurance policies and procedures, and to advise the Government on funding allocations. Significant funding was to be allocated to universities that could “demonstrate effective quality assurance practices and excellent outcomes” (CQAHE, 1995, p. 26). Institutions received differential levels of funding depending on their performance in the reviews.


Three cycles of annual reviews were conducted from 1993 to 1995, which involved an institutional self-evaluation and a site visit from the review team. Each cycle had a particular focus, with teaching and learning the focus of the 1994 review. Each institution's goals and strategies for teaching and learning were examined, and the following areas were assessed: overall planning and management of undergraduate and postgraduate teaching and learning programmes; curriculum design, delivery and assessment; evaluation, monitoring and review; learning outcomes; use of effective teaching and learning methods; student support services and other teaching support services such as library and computer facilities; staff recruitment, promotion and development; and postgraduate supervision.

There were mixed opinions of how helpful the external reviews had been. On the DEST website it is claimed that these reviews triggered considerable change within institutions through the identification of gaps and measurement of outcomes that took place as a result of the self-assessment and review process. The National Board of Employment, Education and Training (NBEET) noted that much of the criticism of the quality reviews stemmed from the use of rankings, which was considered to be detrimental to achieving improvement of the system, stating that “…the quality groupings have been interpreted by the press and the university community as rank orderings ... In addition, there is evidence that the publication of the groups has been disadvantageous to the lower grouped institutions in their international marketing and has had financial implications for their future operation” (NBEET, 1995, p. 22). In truth, both these perspectives are valid, as there were considerable advances made in establishing effective institutional practices, but at the expense of institutions in the lower band, which suffered loss of funding and reputation. There are echoes of these same issues in the current funding model and outcomes of the Learning and Teaching Performance Fund.

Following the CQAHE reviews, there was a brief lull in quality assurance initiatives from the Commonwealth, other than the requirement for universities to submit an annual institutional Quality Assurance and Improvement Plan to the Government as part of the educational profiles process (1998 onwards). Universities were required to outline their goals, strategies, and performance data such as attrition and retention rates, and aggregated data from the Course Experience Questionnaire (CEQ) and Graduate Destinations Survey (GDS). The Commonwealth subsequently generated reports on the quality and outcomes in Australian universities related to courses, students, staff, finances and international enrolments, drawing from the institutional reports and centrally gathered data (DEST Statistics publication website).

The next major quality assurance initiatives came in 2000, when the Ministerial Council on Employment, Education, Training and Youth Affairs (MCEETYA) endorsed the National Protocols for Higher Education Approval Processes (National Protocols) and established an independent audit agency, the Australian Universities Quality Agency (AUQA).

The National Protocols first came into effect in 2000. They promote common principles, criteria, processes and standards across States and Territories for the approval of higher education institutions and programs both in Australia and offshore. All higher education institutions must receive approval to operate and offer a higher education course in Australia. The approval process is designed to assure students and other stakeholders that higher education institutions have met identified requirements and are subject to appropriate government regulation. The Protocols have recently been revised and will come into effect in 2008 (MCEETYA, 2006). Performance indicators are currently being developed as minimum standards to be achieved against specific criteria relating to scholarship; research; breadth and depth of qualifications offered; and scholarship and research supervision in higher education institutions. These indicators are more extensive than existed previously and require both quantitative and qualitative evidence to be provided by institutions for initial and ongoing accreditation.

The current approach to quality assurance in Australia is underpinned by the universities' status as self-accrediting higher education institutions: they have the legislative authority to accredit their own courses and programmes and are publicly recognised on a register of the Australian Qualifications Framework (AQF), a unified system of national qualifications in post-compulsory education and training. Since its inception in 2000, all accredited institutions have been required to develop programmes under the AQF, which specifies qualification titles, descriptors, and expected learning outcomes for each level of a qualification.

Quality auditing

The Australian Universities Quality Agency (AUQA) was established in 2000 by MCEETYA to provide an independent, national quality assurance agency to promote, audit, and report on quality assurance in higher education. AUQA's primary responsibility is to audit the effectiveness of an institution's quality assurance system every five years. The process involves an institutional self-evaluation, a site visit by a review team, and the publication of the results of the review, which contain commendations and recognition of good practice, and recommendations identifying areas for improvement. Institutions provide interim reports to AUQA on their progress in implementing the recommendations.

AUQA advocates a ‘fitness for purpose’ approach which respects the diversity of institutions. In the first cycle of audits, institutions were audited against their own mission and objectives, not a mandated list of performance standards. However, AUQA's audit manual V3 did specify that the scope of an institutional audit should include: organisational leadership and governance; planning; teaching and learning (all modes); processes for program approval and monitoring; comparability of academic standards in onshore and offshore programs; research activities and outputs, including commercialisation; community service activities; internationalisation, including contracts with overseas partners; support mechanisms for staff and students; communication with internal and external stakeholders; systematic internally initiated reviews (e.g., of departments, themes), including the rigour and effectiveness of the review mechanisms employed; and administrative support and infrastructure (AUQA Audit Manual, Version 3.0, 2006).

AUQA also reviews universities' compliance with externally set protocols such as the National Protocols, and other recognised national and international standards, guidelines and good practice principles. A particular emphasis for cycle 2 audits is on institutions outlining how they use the relevant guidelines and legislation to determine their practices and processes and to determine their standards and performance outcomes. They are expected to provide an overview of benchmarking activities and outcomes undertaken since the last AUQA visit, with an emphasis on the impact of such benchmarking on the institution's outcomes (AUQA Audit Manual, Version 4.1, 2007). Cycle 2 of the AUQA audits places a stronger focus on performance, outcomes and standards, along with the use of external reference points and benchmarking activities. The emphasis on greater use of performance outcomes and standards in the second cycle signals a shift to include a ‘fitness of purpose’ approach.

Other Commonwealth quality initiatives

Promoting and supporting benchmarking

To facilitate a more consistent approach to benchmarking within the higher education sector, the Commonwealth commissioned McKinnon, Walker and Davis to produce a benchmarking manual. The Benchmarking: A manual for Australian universities (1999) report identified 67 benchmarks in nine areas of university activity: governance, planning and management; external relationships; financial and physical infrastructure; learning and teaching; student support; research; library and information services; internationalisation; and staffing.

A review of the universities' use of the manual found that while it had stimulated interest among universities in benchmarking, there was a general consensus that the manual was not helpful and, as a consequence, it had not been widely used. The few universities that had used the manual to carry out benchmarking did not believe their university's performance had improved as a result (Garlick & Pryor, 2004). Its primary uses were as a reference for ideas; to identify criteria for developing performance indicators; as a guide to performance reporting; as a mechanism to identify areas for evaluation; and for financial planning. The overwhelming conclusion was that the weaknesses of the manual outweighed its utility. Criticisms included that it was considered overly complex and difficult to comprehend; difficult to apply to specific circumstances; focused on accountability rather than encouraging improvement; focused on senior management and ignored the valid involvement of a range of relevant stakeholders within and outside the university; was overly prescriptive; was contextualised within a traditional university environment and therefore not appropriate to regional, newer, flexible learning, non-traditional student base or non-research intensive institutions; was a one size fits all tool that did not allow for diversity either between universities or across functions within the one university; was more suited to the needs of the larger universities; was only of generic and superficial use rather than capable of practical and useful guidance; and did not involve a collaborative improvement approach (Garlick & Pryor, 2004). A major question surrounded the meaning of ‘good practice’, a term that was peppered throughout the manual. The manual provided no guidance on what constituted ‘good practice’, or on how ‘good practice’ might be determined. It was concluded that there would be little benefit to the Commonwealth in investing further in the development of the manual (Garlick & Pryor, 2004).


Research Quality Framework

Considerable effort has been expended on the Research Quality Framework (RQF) initiative, which has the expressed purpose of developing an improved assessment of the quality and impact of publicly funded research and an effective process to achieve this. This initiative was announced in 2004 as part of the Backing Australia's Ability – Building our Future through Science and Innovation package. The intention is to use the information gathered through this process to redistribute a significant proportion of university block funding ‘to ensure that research areas of the highest quality and highest impact are rewarded’. After extensive consultation and a number of proposed models, the RQF has been finalised and is scheduled for implementation in 2008.

Australian Education International (AEI)

Australian Education International (AEI) is a section of DEST that collaborates across national, state and department levels and with industry partners to facilitate a sustainable education and training export industry. Its brief extends beyond higher education and includes schools and vocational and further education. Its aims include increasing recognition of Australia's education systems and qualifications, and facilitating the enhancement of the education industry.

The Brisbane Communiqué

In April 2006, Ministers and senior officials from 27 countries met at the inaugural Asia-Pacific Education Ministers' Meeting and launched the Brisbane Communiqué Initiative. They agreed to collaborate on:

• quality assurance frameworks for the region linked to international standards, including courses delivered online
• recognition of educational and professional qualifications
• common competency based standards for teachers, particularly in science and mathematics
• the development of common recognition of technical skills across the region in order to better meet the overall skills needs of the economic base of the region.

A Senior Officials’ Working Group, currently chaired by Australia through DEST, was established to progress the Brisbane Communiqué. One of the activities of this group was to commission a scoping study to review the quality assurance frameworks of the member countries. This report is currently being finalised and is expected to be released in late 2007 (Brisbane Communiqué Initiative, DEST website).

European Union-Australia cooperation in higher education and vocational education and training

The primary aim is to promote understanding between the peoples of the European Union (EU) and Australia and to improve the quality of their human resource development, in order to increase academic cooperation and improve student mobility between Australia and the EU. The EU and Australia initiated a pilot phase of projects on cooperation in higher education starting in 2002, and continuing in 2003 and 2004. The fourth round of collaborative projects between institutions in Australia and the EU has recently been announced, and the areas of focus are:

• Building an interdisciplinary collaborative program in Business, Environment, Science and Technology
• Global citizenship: European and Australian perspectives
• Governance and security: Challenges to policing in the 21st century
• Network of undergraduate degrees in ethics, human rights and institutions

The Bologna Process

The Bologna Process aims to achieve greater consistency and portability across higher education institutions in Europe. For Australian institutions to continue to uphold international standards and remain attractive to domestic and international students, they must consider how best to respond to global developments such as Bologna, and focus on maintaining diversity and quality. A commitment made by Australia in relation to the Bologna Process is to provide all Australian students with a ‘diploma supplement’ on graduation with their transcript. There have been two earlier pilots to explore the feasibility of this, with a collaborative project now established by DEST and led by the University of Melbourne.

Other initiatives

There are also other activities in which Australia is involved that relate to international education. These include:

• the development of a Transnational Quality Strategy to ensure the quality and integrity of Australian education and training both on and off shore
• the AEI-National Office of Overseas Skills Recognition, which provides information and advice regarding overseas and Australian qualifications
• the International network directory, which represents Australia's interests overseas
• the Provider Registration and International Student Management System, which enables Australian institutions to comply with the Education Services for Overseas Students Act 2000.

Institutional Assessment Framework (IAF)

The Institutional Assessment Framework (IAF) superseded the Educational Profiles process in 2005 (DEST, updated 2007). The IAF aims to cultivate a ‘strategic bilateral engagement with each higher education provider’. The purpose of the IAF is to ensure institutional quality, accountability and sustainability through minimising and rationalising national reporting requirements. Data gathered for the IAF is reported on the DEST website. Universities are assessed by the Commonwealth on a range of qualitative and quantitative data from institutional and external sources. Biennial bilateral strategy meetings between individual institutions and DEST are based on this Commonwealth assessment. Supplementary meetings can be organised as specific needs arise, such as an institution's concerns with its previous assessment.

The IAF addresses four elements:

1) Organisational sustainability, which establishes the feasibility of institutionally provided services and focuses on: a) strategic focus; b) risk management; c) financial viability.
2) Achievements in higher education provision, which determines whether the Government's higher education objectives have been met and focuses on: a) teaching/learning; b) research and research training; c) equity and indigenous access.
3) Quality of outcomes, which draws on a range of indicators including the GDS, CEQ, student entrance scores, student attrition rates and progress rates, concentrating on those that relate to: a) systems and processes; b) teaching/learning; c) research; d) the AUQA audit.
4) Compliance, which ensures effective and appropriate expenditure, in line with legislative and administrative requirements, and focuses on: a) financial acquittal; b) national governance protocols; c) workplace reform; d) programme guidelines and legislation.

DEST also requires information on the following:

• Strategic planning
• Capital asset management plans
• Equity
• Indigenous education statement
• Student load data
• Research and research training management report (not required in 2007)

The information required is a mixture of quantitative and qualitative information. In large part the information is used to inform funding decisions and monitor the viability and sustainability of the institutions. Some aspects of the IAF are believed to increase the administrative burden on universities and detract from its inherent value (PhillipsKPA, 2006). Of particular interest for this report is that the information gathered through the IAF process is a source of some of the data used for the Learning and Teaching Performance Fund.


Funding initiatives for learning and teaching quality

The Commonwealth announced in 2004, via the Backing Australia's Future package, three initiatives to support the quality of teaching and learning in universities: the establishment of the Carrick Institute for Learning and Teaching in Higher Education, an expansion of the Awards for University Teaching, and the Learning and Teaching Performance Fund (LTPF).

The Carrick Institute for Learning and Teaching in Higher Education

The Carrick Institute was established in 2004 by DEST, with its first year of operations in 2006. The purpose of the Institute is to provide a national focus to enhance learning and teaching in Australian higher education providers. The Carrick Institute was preceded by a number of Commonwealth initiatives to promote and support teaching and learning in higher education, each of which operated for fairly short periods under national committees: the Commonwealth Staff Development Fund (CSDF), established in 1990; the Committee for Australian University Teaching (CAUT), established in 1992; the Committee for University Teaching and Staff Development (CUTSD), established in 1997; and the Australian Universities Teaching Committee (AUTC), established in 2000.

The Carrick Institute's responsibilities include:

• Management of a major competitive grants scheme for innovation in learning and teaching;

• Liaison with the sector about options for articulating and monitoring academic standards;
• Improvement of assessment practices throughout the sector, including investigation of the feasibility of a national portfolio assessment scheme;
• Facilitation of benchmarking of effective learning and teaching processes at national and international levels;
• Development of mechanisms for the dissemination of good practice in learning and teaching;
• Management of a programme for international experts in learning and teaching to visit Australian higher education providers and the development of reciprocal relationships with international jurisdictions; and
• Coordination of the Australian Awards for University Teaching, including the Awards presentation event (see below).

The Institute is governed by a board appointed by the Minister for Education, Science and Training.

Awards for University Teaching

The Australian Awards for University Teaching (AAUT) were established in 1997 by the Australian Government to celebrate and reward excellence in university teaching. Each year, approximately 15 awards were made to recognise outstanding teaching, whether by individuals or teams. In 2006 these awards were increased to 250 awards in different categories and renamed the Carrick Awards for Australian University Teaching (CAAUT). The Carrick Institute administers these awards on behalf of the Commonwealth.

Learning and Teaching Performance Fund (LTPF)

The LTPF scheme was established by the Australian Government in 2003 as part of the Our Universities: Backing Australia's Future package to reward the “higher education providers that best demonstrate excellence in learning and teaching”. Funding of $54.4 million in 2006, $83 million in 2007 and over $83 million in 2008 was allocated as part of the Government's renewed focus on teaching quality in Australian universities. The rationale for the fund was to promote the overall quality of the sector and place excellence in learning and teaching alongside research excellence as a valued contribution to Australia's knowledge systems.

The LTPF scheme involves two stages, with entry into the second stage contingent upon meeting the requirements of the first. The first stage requires institutions to submit evidence of a teaching and learning plan, professional development practices and opportunities, probation and promotion policies, and evidence of canvassing student satisfaction, and to publish such information on the institution's website. Institutions that progressed to the second stage were then assessed on rates of graduate employment, further study, student satisfaction, retention and progression.

There have been two cycles of funding allocation under the LTPF. In the first round in 2006, 14 of the 38 participating institutions shared in $54 million. In the second round in 2007, the assessment was broadened from a whole-of-institution basis to broad discipline groupings, and 30 of the 38 participating universities shared in $83 million.

The development and implementation of the LTPF

The development of the LTPF and its implementation have been characterised by a process of extensive consultation on the initial model and its application between DEST and the higher education sector, including the AVCC Working Group on Learning and Teaching and a subsequent Advisory Group of representatives from the higher education sector. As concerns were raised with the adjustment methodology in the first round, Access Economics was commissioned to examine the validity and reliability of the key indicators and methodologies (2005). The indicators used were taken from the Graduate Destination Survey (full-time employment and further full- or part-time study), the Course Experience Questionnaire (satisfaction with generic skills, satisfaction with good teaching, and overall satisfaction), and DEST's annual university statistics collection (progress and retention rates). In an effort to account for differences in university performance, indicators were adjusted for potentially confounding variables. Not surprisingly, the indicators that were chosen reflected Linke's (1991) categories of performance indicators: quality of teaching, student progress and achievement, and graduate employment.

Another cycle of consultation was instigated by the DEST discussion paper (December 2005), which canvassed a number of issues that arose from the first LTPF round. The AVCC responded with a detailed proposal (AVCC, May 2006), as did many individuals and institutions, with over 50 submissions received. An LTPF Advisory Group was established to consider the submissions and issues, and to make recommendations to the Minister. All recommendations were accepted by the Minister for the second round of the fund (June 2006). The adjusted model accounted for disciplines and resulted in more institutions receiving funding in the second round.

As can be seen in Table 1.1, the LTPF is not distributed evenly across the system. Of the 38 universities that participated in the LTPF process, only 5 received $67.3 million, or approximately half the available funds, and the first 10 universities received $98.8 million, or seventy-two percent of the funds. The remaining 19 successful universities received smaller allocations from the remaining 28% of the fund, and 8 universities received no funding in either year. Put more crudely, approximately one quarter of the institutions received three-quarters of the funds, a further two quarters of the institutions received one quarter of the funds, and one quarter received no funding. The distribution of funds was also unequal by type of institution and location, with 7 of the Go8 universities among the top 10 universities receiving the most funding over the two-year period, and 9 of the 10 highest funded universities located on the eastern seaboard of Australia.

Concerns remain around the suitability of the performance indicators used, the adjustment methodology, the quality of the data, and the differential outcomes of the fund. Issues surrounding the use of indicators to allocate funding are discussed further in this report. Additional issues include:

• DEST remains committed to increasing the transparency of the adjustment process and has provided detailed information on the adjustment methodology through technical and commissioned reports. In addition, DEST has commissioned a project to further refine the adjustment methodology for the 2008 round.
• The issue of the quality of the data relates to the administration of the surveys and the response rate from the students. The need for improved administration practices has been addressed by the AVCC and the GCA through their published Code of Practice and Standard Methodology guides for the administration of the GDS, CEQ and PREQ. The issue of response rates by the students remains problematic and several strategies may need to be considered to improve this.
• The quality of the DEST data collected through the IAF on the extent to which retention and progress are recorded has been raised as an issue, as has the consistency of institutional practices in recording this information. The ability to track students through a student identifier may provide an effective tool in tracking student progress and retention in the higher education system and may lead to retention, progress and completion being defined differently in the future. Further consultation between DEST and the institutions is needed to continue to improve the quality of the data provided through standardisation of definition and administration processes.
• The use of lagging data to allocate funding has been raised as a concern by a number of parties. For example, the data used to allocate the 2006 fund was drawn from 2003, the 2007 fund was drawn from 2004 data, and the 2008 fund will be drawn from 2005 data.
• The model for funding has been largely determined for the first three rounds of the LTPF. The possibility of changing the model was canvassed in the LTPF Future Directions Discussion Paper (DEST, December 2005), for when the LTPF moves into its second cycle.

Table 1.1: LTPF funding allocation by institution (2006 & 2007)

Institution | LTPF 2006 | LTPF 2007 | Total (‘06 + ’07)
Australian Catholic University | $2,110,000 | $500,000 | $2,610,000
Australian Maritime College | $1,143,000 | - | $1,143,000
Central Queensland University | - | - | -
Charles Darwin University | - | - | -
Charles Sturt University | - | - | -
Curtin University of Technology | - | $1,461,601 | $1,461,601
Deakin University | - | $500,000 | $500,000
Edith Cowan University | - | - | -
Flinders University | - | $1,926,237 | $1,926,237
Griffith University | - | $500,000 | $500,000
James Cook University | - | $500,000 | $500,000
La Trobe University | - | $2,422,052 | $2,422,052
Macquarie University | - | $2,994,432 | $2,994,432
Monash University | $4,591,000 | $4,253,696 | $8,844,696
Murdoch University | $2,034,000 | $3,329,942 | $5,363,942
Queensland University of Technology | - | - | -
RMIT University | - | $500,000 | $500,000
Southern Cross University | - | $500,000 | $500,000
Swinburne University of Technology | $3,852,000 | $2,519,587 | $6,371,587
The Australian National University | $2,060,000 | $3,967,437 | $6,027,437
The University of Adelaide | - | $1,342,131 | $1,342,131
The University of Melbourne | $9,853,000 | $8,908,476 | $18,761,476
The University of New England | $2,218,000 | $1,506,575 | $3,724,575
The University of New South Wales | - | $6,650,133 | $6,650,133
The University of Newcastle | - | - | -
The University of Queensland | $10,424,000 | $8,050,250 | $18,474,250
The University of Southern Queensland | - | - | -
The University of Sydney | $2,580,000 | $4,226,105 | $6,806,105
The University of Western Australia | $4,950,000 | $6,287,722 | $11,237,722
The University of Western Sydney | - | - | -
University of Ballarat | $1,560,000 | $1,632,667 | $3,192,667
University of Canberra | $1,898,000 | $1,735,582 | $3,633,582
University of South Australia | - | - | -
University of Tasmania | - | $2,434,054 | $2,434,054
University of Technology, Sydney | - | $5,555,451 | $5,555,451
University of the Sunshine Coast | - | $500,000 | $500,000
University of Wollongong | $5,108,000 | $5,417,632 | $10,525,632
Victoria University | - | $1,878,229 | $1,878,229
TOTAL | $54 million | $83 million | $137 million
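The concentration of funding described above can be checked directly against the Total column of Table 1.1. The short calculation below is an illustrative sketch only: it simply sums the 38 institutional totals as transcribed from the table (with zero for institutions that received no funding), and small differences from the rounded dollar figures quoted in the text are to be expected.

```python
# Illustrative check of the concentration of LTPF funding, using the 38
# institutional totals in the Total column of Table 1.1 (zero for institutions
# that received no funding in either year).

totals = [
    2_610_000, 1_143_000, 0, 0, 0, 1_461_601, 500_000, 0, 1_926_237, 500_000,
    500_000, 2_422_052, 2_994_432, 8_844_696, 5_363_942, 0, 500_000, 500_000,
    6_371_587, 6_027_437, 1_342_131, 18_761_476, 3_724_575, 6_650_133, 0,
    18_474_250, 0, 6_806_105, 11_237_722, 0, 3_192_667, 3_633_582, 0,
    2_434_054, 5_555_451, 500_000, 10_525_632, 1_878_229,
]

totals.sort(reverse=True)
fund = sum(totals)

print(f"Total allocated over 2006-07: ${fund / 1e6:.1f} million")
print(f"Share received by top 5:      {sum(totals[:5]) / fund:.0%}")
print(f"Share received by top 10:     {sum(totals[:10]) / fund:.0%}")
```

Run on the transcribed totals, the top five institutions hold roughly half of the fund and the top ten roughly seventy per cent, consistent with the proportions quoted above.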

Summary of the Australian higher education context

The Commonwealth has established an effective quality framework for higher education in Australia. It has systematically implemented quality reviews and audits, established frameworks and guidelines for accreditation, and established mechanisms by which quality research and teaching can be identified. Within the higher education sector, there is much that has been achieved and recognised as leading practice: the early initiative of administering national student experience and graduate destinations surveys has triggered the implementation of similar practices elsewhere. The quality auditing process is well regarded and is considered effective and practical. The proposed research quality framework is attempting to avoid some of the more problematic aspects of other national systems and to capture the impact dimension. The national data collection process through the Institutional Assessment Framework has evolved into its current form to improve the quality of the national data collection methods, and there has been a significant increase in funding of initiatives to reward and enhance quality in teaching and learning. These significant initiatives have been achieved through collaboration between the Commonwealth, the States and higher education institutions.


SECTION 2: GLOBAL TRENDS AND QUALITY INITIATIVES IN TEACHING AND LEARNING

This section begins with a brief overview of national trends and initiatives in quality assurance and accreditation of teaching and learning. Instruments and measures used at the national or regional levels are then reviewed in more detail. A review of problems and issues associated with national level indicators concludes this section.

2.1: Global trends in teaching and learning

The national trends and initiatives related to teaching and learning are outlined for the following countries: United States of America; United Kingdom; New Zealand; Hong Kong; and Europe, with illustrative examples from the Netherlands, Sweden, Italy, Germany and Hungary. An overview of Australian trends and initiatives can be found in Section 1 of this report. As the review has been limited by the availability of documents in English, this section is illustrative rather than comprehensive.

United States of America (USA)

The higher education sector in the USA is a state-based system, without centralised control at the national level. The U.S. Department of Education acts mainly as a repository of federal funds, and the quality assurance of post-secondary education is delegated to the states and accreditation bodies. However, a report commissioned by the U.S. Department of Education, known as the Spellings Report, suggests that a more centralised approach to accountability should be pursued, and makes a number of contentious recommendations. These include: the development of a higher education information system that collects, analyses and reports student data such as retention and graduation rates; the development of more outcomes-focused accountability systems; investment in the research and development of instruments measuring the intersection of institutional resources, student characteristics, and educational value-added, that will allow benchmarking; the collection of data from assessments of adult literacy, licensure, and graduate and professional examinations, to enable interstate comparisons of student learning; and the provision of financial incentives from the federal government in support of these initiatives (US Department of Education, 2006). The nation-wide initiatives include competitive grants offered by the federal government each year to support innovative projects for higher education, the student and staff surveys administered by the National Centre for Education Statistics, the voluntary National Survey of Student Engagement, and most notably, the biennial Measuring Up national report cards on higher education performance prepared by the independent National Centre for Public Policy and Higher Education. Each of these is summarised below, followed by an overview of accreditation, which is the primary means of quality assurance in the U.S.A.


Fund for the Improvement of Post-Secondary Education (FIPSE)

The Department of Education established the FIPSE in 1973 to improve the quality and accessibility of post-secondary education by supporting ‘learning-centred’ initiatives and educational improvements. The fund’s primary activity is the Comprehensive Program – an annual competition for grants to support “innovative educational improvement projects that respond to problems of national significance” in higher education (U.S. Department of Education, 2007, 26). Projects that have the potential to improve learning outcomes as measured by student achievement and performance, and are capable of effective dissemination, will attract funds. Grants may be awarded for up to three years, and grantees must be prepared to take over the costs of sustaining the project after the federal funding period has ended. Approximately 50-60 Comprehensive Program grants were awarded in 2006, with each grant ranging from $150,000 to $600,000 over the three-year period. The U.S. Department of Education has recently announced that it has set aside US$2.5 million in the FIPSE budget for at least one competitive grant to support efforts to systematically measure, assess, and report student achievement and institutional performance at the postsecondary level (Chronicle of Higher Education, 20 June, 2007).

The National Centre for Education Statistics (NCES)

The NCES collects and analyses data on behalf of the U.S. Department of Education, and has conducted several studies in relation to postsecondary education (NCES, 2007). These include the:

• Baccalaureate and Beyond study, which followed a cohort of students in their last year of undergraduate studies, surveying them about their undergraduate experience and their future employment and education expectations, and, following graduation, on their job search activities, education and employment experiences;

• Beginning Postsecondary Students Longitudinal Study, which asked commencing students about their transitions to postsecondary education;

• Integrated Postsecondary Education Data System, which collected institution-level data on enrolments, program completions, faculty, staff and finances; and

• National Study of Postsecondary Faculty, which was developed to collect data on full- and part-time faculty and instructional staff at public and private institutions.

National Survey of Student Engagement (NSSE)

The NSSE survey was developed in 1998 as an alternative tool to gather information on the quality of undergraduate studies, and was a response to increasing interest in how students spend their time, and whether their post-secondary activities include those identified as beneficial to learning. This is described in more detail in section 2.2.

Measuring Up: The National Report Card on Higher Education

The National Center for Public Policy and Higher Education began producing report cards in 2000 on the performance of each state’s postsecondary education sector, to provide policymakers and the public with information to assess and improve each state’s postsecondary education. Measuring Up has become a biennial project and has been successful in drawing attention to deficiencies or problems in the higher education system and fostering improvement. The most current report, Measuring Up 2006, evaluated, compared and graded states on their higher education performance in six areas: preparation for college; participation; completion; affordability; benefits; and learning, using the most recent data available (National Center for Public Policy and Higher Education, 2006). Measuring Up 2000 attracted a lot of attention when all 50 states received an “Incomplete” grade in the Learning category, which was due to a lack of comprehensive national data on college-level learning to compare state performance in what is claimed to be one of higher education’s highest priorities (Miller & Ewell, 2005). Since then, much has been done to collect comparable data. Measuring Up 2006 evaluated state performance in Learning using the following indicators, which are further described in section 2.2:

• Literacy levels of the state’s residents
• Graduates ready for advanced practice
• Performance of college graduates

State-level initiatives

The quality assurance of higher education remains largely the prerogative of the states, which assume varying degrees of control over the sector. Increasing concerns with public accountability have driven states to increase their requirements for the measurement and reporting of performance. In the year following the publication of Measuring Up 2000, nine states initiated performance reporting, undoubtedly concerned by the issues that the report card raised, particularly in regard to the measurement of learning (Burke & Minassians, 2002). Indicators required in performance reporting tend to reflect current state priorities (Creech, 2000). Common indicators required in performance reports include: retention and graduation rates; enrolment rates by ethnicity, gender and age; time to degree; degrees awarded by level, number and field; licensure test scores; remedial activities and their effectiveness; student transfers; job placements; and faculty and staff diversity (Burke, Minassians & Yang, 2002).

Performance funding is another major state contribution to the quality assurance process. Tennessee established its Performance Funding Program in 1978 and was the first state to have a performance funding scheme for higher education. The fund rewards institutions for exemplary performance on student, academic program and institutional indicators in five-year cycles. Indicators used include: student performance on general aptitude tests and licensure examinations; student retention rates; student, alumni and employer satisfaction; graduate employment destinations (used in two-year technical colleges only); accreditation of academic programs; credit transfer and articulation policies; progress toward institutional and state goals; and annual reports of how institutions have remedied weaknesses identified through the performance funding exercise (Tennessee Higher Education Commission website, 2007). The success of Tennessee’s Performance Funding Program triggered the initiation of similar schemes in other states.


Accreditation

Accreditation is the most prominent form of quality assurance of higher education in the United States and is required for eligibility for federal and state funding. There are six regional accrediting agencies that provide institutional accreditation. A further seven national accrediting agencies offer accreditation for particular types of institutions, such as religious colleges. Specialised accreditation agencies evaluate particular units, schools, or programs, especially those that require state licensing. Accreditation is founded on the principles of self-regulation and peer review (Shray, 2006), and the ability to exercise self-regulation through accreditation is an important part of the heritage of U.S. higher education (Ikenberry, 1997). Nevertheless, accrediting agencies are accountable to the sector, the public, and the government, and must also undergo periodic external assessments to be recognised as accreditation organisations. While the act of accreditation is a non-governmental activity, the process of recognising accreditors is not. The U.S. Department of Education requires accreditors to maintain criteria or standards in specific areas in order to be recognised as accreditation agencies. These areas include: student achievement, curricula, faculty, facilities, fiscal and administrative capacity, student support services, recruiting and admissions practices, measures of program duration and the objectives of degrees or credentials offered, and records of student complaints (Eaton, 2006). Recently, there have been calls for change in the accreditation system, with claims that the accreditation standards are out of date and do not reflect leading quality practices (Dickeson, 2006), that quality assurance should entail more than satisfying minimal standards, and that the numerous accrediting bodies lack integration and coordination (Dickeson, 2006; Shray, 2006).

United Kingdom (UK)

The United Kingdom has a highly managed, centralised higher education system, which has been described as overly costly, intrusive and bureaucratic (Harman & Meek, 2000). Although higher education institutions in the U.K. are self-governing and independent of the Government, the majority receive government funding and, as a result, are subject to numerous top-down processes. Government funds are channelled through the relevant higher education funding council, which is also responsible for promoting quality in the sector. However, the existence of a funding council or intermediary body between the government of the day and the universities is argued to be an effective way of managing the relationship (Eastwood, Higher Education Summit, 2007). The Government’s emphasis on the importance of teaching and learning (Department for Education and Skills, 2003) has resulted in increasing pressure on the U.K. funding councils to act more proactively in improving teaching and learning. A number of initiatives have been implemented in pursuit of this. Some initiatives are relevant only to England, as Northern Ireland, Wales and Scotland have some autonomy and have established separate initiatives.

Higher Education Funding Council for England (HEFCE)

The Higher Education Funding Council for England (HEFCE) is a public body of the Department for Innovation, Universities and Skills (previously the Department for Education and Skills) in the United Kingdom, and has distributed funding to Universities and Colleges of Higher and Further Education in England since 1992. It was created by the Further and Higher Education Act 1992. HEFCE works within a policy framework set by the Secretary of State for Innovation, Universities and Skills, but is not part of the Department. It has distinct statutory duties that are free from direct political control. HEFCE distributes public money for teaching and research to universities and colleges. In doing so, it aims to promote high quality education and research within a financially healthy sector. The Council also plays a key role in ensuring accountability and promoting good practice. The Scottish and Welsh funding councils operate as intermediary bodies between the relevant government departments and the higher education sector in a similar way to HEFCE.

In addition to distributing both teaching and research funding to higher education institutions, HEFCE is also involved with: widening participation; developing links between higher education institutions and business and the community; and enhancing leadership, governance and management within the sector. It provides both a contribution to core university funding and funding for special initiatives, projects and strategic aims. HEFCE has specific responsibility for:

• Distributing money to universities and colleges for higher education teaching, research and related activities
• Funding programs to support the development of higher education
• Monitoring the financial and managerial health of universities and colleges
• Ensuring the quality of teaching is assessed
• Providing money to further education colleges for their higher education programs
• Providing guidance on good practice.

HEFCE collects information on teaching and learning, research, funding and governance indicators at the institutional level, and also analyses this data to produce sector-wide reports. The performance indicators on teaching and learning defined by HEFCE (1999/66) include:

Institutional teaching and learning indicators

1. Participation of young full-time (FT) students from specified social classes
2. Participation of young FT students from less affluent neighbourhoods
3. Participation of young FT students from state schools
4. Participation of students without HE qualifications
5. Participation of students without HE qualifications from less affluent neighbourhoods
6. Progression of FT first degree entrants to second year of study
7. Resumption of studies of FT first degree entrants after a year of inactivity
8. Learning outcomes of FT first degree students
9. Learning efficiency of FT first degree students
10. Module completion for PT undergraduate students
11. Qualifiers seeking employment


Sector teaching and learning indicators

1. Participation of young people in HE by neighbourhood type
2. Progression of FT first degree entrants to second year of study
3. Resumption of studies of FT first degree entrants after a year of inactivity
4. Learning outcomes of FT first degree students
5. Learning efficiency of FT first degree students
6. Cost per graduate
7. Qualifiers seeking employment

These indicators have recently been reviewed, and a series of recommendations propose changes to some of the existing performance indicators and the addition of some new indicators (HEFCE, June 2007/14).

Higher Education Academy (HEA)

In 2004, the Higher Education Academy was established by the U.K. funding councils to support activities in relation to the improvement of student learning, and the professional development and recognition of academic staff. The HEA was initially formed through a merger of the Institute for Learning and Teaching in Higher Education (ILTHE), the Learning and Teaching Support Network (LTSN), and the TQEF National Co-ordination Team (NCT). The Academy’s key contributions include the:

• development of the Professional Standards Framework, which articulates six areas of activities, core knowledge, and professional values expected of academic staff. It is intended that institutions apply this framework to their induction and professional development programmes and activities, as a means of demonstrating that professional standards for teaching and supporting learning are met;

• accreditation of teaching and learning programmes, which are evaluated against the Professional Standards Framework;

• introduction of a new Professional Recognition Scheme where, depending on their role and achievement in teaching and supporting learning (and completion of an HEA-accredited program), staff achieve Associate, Fellow, or Senior Fellow status;

• recognition and reward of teaching excellence through the National Teaching Fellowship Scheme (NTFS), through which individuals who have made an outstanding impact on student learning are recognised; and

• provision of discipline-based support through the Subject Network of 24 Subject Centres. Each subject centre engages in a wide range of activities such as organising events, running projects, and providing information and resources, to support practitioners, departments, and discipline communities.

Centres of Excellence for Teaching and Learning (CETL)

The Higher Education Funding Council for England (HEFCE) has also funded 74 Centres of Excellence for Teaching and Learning (CETLs) since 2005, where departments, disciplines or organisational units bid for funding in a two-stage process. At stage 1, applicants are required to state their case for excellence in their specified area and submit evidence in support of their claim. Those who are approved at stage 1 then submit a detailed plan of how they aim to develop their area of excellence. This is HEFCE’s largest single funding initiative in teaching and learning; each CETL receives between £1 million and £2.5 million over a five-year period. The CETLs are based in the institutions and are funded and managed separately from the Higher Education Academy (HEFCE, 2005).

Quality Assurance Agency (QAA)

The funding councils are required by law to ensure that government funds are used appropriately and that the quality of education they fund is assessed. As such, the QAA was formed in 1997 to carry out institutional audits on universities and colleges. In addition to conducting external reviews, QAA also advises the government on applications for degree awarding powers and university title; offers advice on academic standards and quality; and describes good practice and academic standards through its development of the Academic Infrastructure, a set of nationally agreed reference points that include:

• frameworks for higher education qualifications – descriptions of the competencies expected at the achievement of major qualifications (e.g., bachelors, masters, doctorates);

• subject benchmark statements – standards that are expected in subject areas (e.g., history, medicine, engineering);

• programme specifications – information provided by each institution about the detail and nature of its programmes;

• Code of Practice – a guideline on good practice in terms of managing academic standards and quality within institutions; and

• progress files – designed to assist students in monitoring, building and reflecting upon their personal development, and consisting of a transcript, a personal development plan, and individual student records.

QAA audits involve an institutional self-evaluation, an external review visit, and publication of the review team’s findings. Prior to 2003, QAA carried out subject reviews which covered the full breadth of teaching and learning activities within the discipline, including the observation of classroom practices; methods of student assessment; students’ work and achievements; curriculum organisation; staff development; resource provision; and student support and guidance. These reviews were very unpopular and were finally abolished to reduce the amount of external scrutiny and burden on institutions and academics, and to recognise their autonomy. It is intended that with institutional audits, the responsibility for maintaining quality and standards is handed back to institutions (QAA, 2007). Summaries of external reviews can be accessed on the government’s Teaching Quality Information (TQI) website (www.tqi.ac.uk), which was established to provide students and other stakeholders with accurate information on institutions. The website also contains results from the annual National Student Survey, which was initiated in 2005 as a means of garnering students’ perspectives on teaching quality within institutions and most closely resembles the Australian CEQ. The National Student Survey is described in more detail in section 2.2.


New Zealand

Higher education institutions in New Zealand have the benefit of a high degree of autonomy and academic freedom relative to other nations, although that is expected to change with the Government’s introduction of a quality assurance and monitoring system to be implemented by 2009. Currently, the New Zealand Vice-Chancellors’ Committee (NZVCC) is responsible for the quality assurance of the university sector, and has both auditing and accrediting functions. Its Academic Audit Unit conducts institutional audits, and its Committee on University Academic Programmes provides course approval and accreditation for universities offering new qualifications or making substantial changes to existing qualifications (NZVCC, 2007). The Government’s involvement in the quality processes of universities is predicted to intensify as its new quality assurance and monitoring system for publicly funded institutions is implemented in 2009, in addition to the quality mechanisms already in place. The new system is expected to direct the sector’s attention to government objectives, ensure that outcomes are being realised, support a continuous improvement culture in the sector, provide public assurance that government funding is well spent, and inform the government’s investment decisions, strategies and priorities (New Zealand Cabinet Office, 2006). This new system will supersede the Government’s initial plans to introduce a performance funding element to reward institutions on the basis of successful course completion rates, course retention rates, and the results of a student opinion survey. However, performance indicators continue to be an essential component of the new system, which will focus on the measurement of outcomes, particularly graduate employment outcomes, as a way of motivating higher education institutions to improve student achievement. Successful course completion rates and course retention rates will be incorporated as other measures of performance. The utility of including a student opinion survey as a measure of performance will be considered at a later date.

Under the new system, all publicly funded institutions will be required to undertake self-assessments that focus on the outcomes sought and the key processes necessary to achieve these outcomes, provide evidence of student and institutional achievement of these outcomes, and demonstrate compliance with legislative and regulatory requirements. In addition to investing in these new reforms, the Government has also allocated funding for the establishment of Ako Aotearoa, the National Centre for Tertiary Teaching Excellence, to promote and support effective teaching and learning across the sector, beginning in 2007. The centre will support research and inquiry into teaching, support efforts to enhance and improve teaching and learning, provide policy advice, act as an information repository and resource for the support of effective teaching, build and maintain networks to spread individuals’ and organisations’ best practice, and continue running and improving the Tertiary Teaching Excellence Awards, which reward individuals who demonstrate student-centred teaching practices and are proactive in their professional development (Ako Aotearoa, 2007).


Hong Kong

Higher education institutions in Hong Kong exist as autonomous entities with substantial freedom to determine curricula and academic standards, the selection of staff and students, and the allocation of resources. However, reliance upon public funds means that the government retains a strong interest in the performance and operations of the university sector. The University Grants Committee (UGC) is responsible for advising the government on issues of academic development, and for decisions about the allocation of funding to Hong Kong’s eight publicly funded higher education institutions. It also plays a key role in the quality assurance of Hong Kong’s higher education sector and is therefore similar to the HEFCE in the United Kingdom in the roles it performs. Publicly funded higher education institutions each receive a recurrent grant to cover costs spent on academic and related activities carried out in response to the government’s public policy objectives. Sixty-eight percent of the recurrent grant is specifically allocated to fund the institution’s teaching function, reflecting the government’s view that teaching is the key function of higher education institutions (UGC, 2007a). Funding in support of teaching occurs in two forms:

• A block grant that is primarily based on student numbers, and is provided to assist with teaching costs.

• Teaching Development Grants (TDGs), which have been provided to institutions on a triennial basis since 1994/95, to encourage more innovative teaching methods and more effective learning environments.

Neither form of grant is performance-based; both are largely dependent upon student numbers. Institutions can use their received funds with discretion but they must regularly report to the UGC on how the grants have been used.

Of all UGC activities, the Teaching and Learning Quality Process Reviews (TLQPR) have attracted the most interest in terms of the assurance and improvement of teaching and learning quality. The TLQPR began in 1995 as a means of providing formal external evaluation of internal quality assurance systems in publicly funded higher education institutions for the first time in Hong Kong (Massy, 1996). It was appraised as the “right instrument at the right time” for Hong Kong as it heightened awareness among institutional leaders and staff of the importance of teaching, which had long been overshadowed by UGC’s concurrent Research Assessment Exercises. Since its second round, TLQPR results have informed funding decisions, and in subsequent rounds, continued to focus attention on teaching and learning; assisted institutions in their efforts to improve teaching and learning quality; and enhanced the accountability of institutions (Massy & French, 1997). The most recent UGC development is the creation of the Quality Assurance Council (QAC) in April 2007, which further augments UGC’s role in the quality assurance of higher education in Hong Kong. The Quality Assurance Council is expected to perform a similar role to that of AUQA in Australia and other comparable quality assurance bodies elsewhere in the world. It will promote quality assurance in Hong Kong’s higher education sector; conduct audits and other reviews as requested by the UGC; report on, maintain and improve the quality assurance mechanisms and quality of the offerings of institutions; advise the UGC on quality assurance matters in the higher education sector in Hong Kong, and other related matters as requested by the grants committee; and facilitate the development and dissemination of good practices in quality assurance in higher education (UGC, 2007b).

Europe and the Bologna Declaration

Europe has taken one of the boldest regional initiatives in higher education with its efforts to create a European Higher Education Area by 2010. The signing of the Bologna Declaration by the Ministers of Education from 29 European countries in 1999 was a symbolic government-level commitment to cooperate to develop compatible and comparable higher education systems, based primarily on a 3+2 or a three-cycle structure of undergraduate and graduate studies. The number of signatories to the Bologna Declaration has since grown to 46 countries. The primary objectives of the Bologna process were to promote transparency, mobility, employability, and student-centred learning within Europe (Bologna Declaration, 1999). The latest Bologna report, Trends V, envisages further work to be done after 2010, particularly in shifting the focus from government and legislative actions to the implementation of reforms within institutions. Improving graduate employability; strengthening dialogue and partnership with external stakeholders; and ensuring that the employment sector takes account of the new degree structures are also listed as key priorities for the European Higher Education Area (Crosier, Purser, & Smidt, 2007). While the Bologna Declaration was clear on the structural reforms that would characterise a European Higher Education Area, it was conspicuously vague on the quality assurance systems that would be necessary to support these reforms (Westerheijden, 2003). In 2005, European standards were established for the internal and external quality assurance of higher education institutions and quality assurance agencies (European Association for Quality Assurance in Higher Education, 2005). The foci of the standards in relation to the quality assurance of higher education institutions are as follows:

Internal quality assurance
• Policy and procedures for quality assurance
• Approval, monitoring and periodic review of programmes and awards
• Student assessment
• Quality assurance of teaching staff
• Learning resources and student support
• Information systems
• Public information

External quality assurance
• Use of internal quality assurance procedures
• Development of external quality assurance procedures
• Criteria for decisions
• Processes fit for purpose
• Reporting
• Follow-up procedures
• Periodic reviews
• System-wide analyses

In the recent London communiqué, signatories to the Bologna process agreed to develop a register of quality assurance agencies that conform to these standards, in order to enhance accountability, and to build trust in quality assurance.


Many countries were also concerned with how to develop descriptors and standards for their higher education degrees that would be comparable with other European nations. To address these concerns, a number of European countries came together to establish the Joint Quality Initiative (JQI), a working group dedicated to developing systems and exchanging dialogue on issues relating to the quality assurance and accreditation of higher education degrees. A primary activity of the JQI involved developing common descriptors of learning outcomes and competencies expected of students upon completing their higher education cycle (e.g., Bachelors, Masters), known as the Dublin Descriptors (JQI, 2004). The Dublin Descriptors are broadly concerned with graduate students’:

• Knowledge and understanding
• Application of knowledge and understanding
• Ability to make informed judgements
• Communication skills
• Learning skills for further study

The Dublin Descriptors have been adopted as cycle descriptors by the European Higher Education Qualifications Framework, to be implemented in 2007, which will be a reference framework to guide the establishment of national qualifications frameworks based on learning outcomes and other Bologna-specific objectives (Bologna Working Group on Qualifications Frameworks, 2005). In 2002, the shared descriptor approach was extended to 135 European universities in 27 European countries, which were keen to demonstrate their commitment to the Bologna process at the institutional level by engaging in the Tuning project. The project aims to establish reference points for subject areas in terms of the subject-specific and generic competencies expected of students, and the expected student workload expressed in terms of credits. The project does not attempt to prescribe rigid subject specifications, but rather seeks to ensure the quality, design and delivery of study programmes by establishing guiding principles and reference points of good practice for each subject. Interestingly, in identifying generic competencies, the project consulted graduates, employers and academics and found that certain academic competencies (such as the capacity for analysis and synthesis, and the capacity to learn and to problem solve) were identified by all as being most important. Graduates and employers were also remarkably similar in the importance they placed on the capacity to apply knowledge; adapt to new situations; work autonomously and in teams; possess interpersonal skills; organise and plan; and communicate well orally and in writing (Gonzales & Wagenaar, 2005). To date, the Tuning project has identified generic and subject-specific competencies for Business Administration, Chemistry, Geology, Education Sciences, History, Mathematics, Physics, European Studies and Nursing. Phase two of the project, which is currently underway, places emphasis on the role of both academic staff and students, focussing on student workload, approaches to learning, teaching and assessment, and the quality enhancement of degree programmes.


Impact of the Bologna Declaration on national approaches

The impact of the Bologna process has varied for different European countries depending on the state of their existing higher education sectors. Following its commencement, higher education systems in many countries underwent major reconstruction in order to conform to the endorsed two-cycle structure. Countries such as Italy and Germany saw Bologna as a major opportunity to rebuild higher education systems which were characterised by high attrition rates, extended time-to-completion, and deteriorating international attractiveness (European Centre for Higher Education, 2003). Smaller countries which are more reliant on study and employment abroad have also seized upon the Europeanization movement (Haug & Tauch, 2001). Each country has made, or is making, the necessary changes to legislation in reconstructing its higher education structure. As Vanderpoorten (2003) observed, while legislation is a national process, embedded within the country’s own legal and political climate, transparency concerning the quality of higher education requires international cooperation regarding the definition of quality. Following Bologna, accreditation gained momentum as the preferred method of quality assurance in the European higher education sector. Accompanying the move to strengthen accreditation has been a commitment to reduce direct government regulation and oversight and to increase the autonomy of institutions. Many governments are still grappling with the tensions associated with this. The recent initiatives in the Netherlands, Sweden, Italy, Germany and Hungary are illustrative of the ways in which European countries are responding.

The Netherlands

Public higher education institutions in the Netherlands must obtain accreditation for their study programmes every five years in order to obtain state funding, have degree awarding powers, and enable their students to be eligible for government study grants and loans. Accreditation of Dutch universities is overseen by a central accreditation organisation (NVAO). Dutch higher education institutions are free to choose various organisations to accredit their study programmes, on the condition that the accreditation criteria used by the organisation are acknowledged by the NVAO as being consistent with its own. NVAO’s accreditation framework assesses degree programmes on the basis of course objectives; course programme; deployment of staff; facilities and provisions; internal quality assurance; and results. The Dutch government will introduce a new funding system at the end of 2007, where government grants are calculated according to the number of students enrolled, the number of degrees awarded, and a teaching supplement – the details of which have yet to be finalised. By allocating funding on the basis of student numbers, quality and efficiency are expected to improve, as there will be increased competition among institutions to attract students.


A new Higher Education and Research Act (WHOO) was also released, which grants significantly more autonomy to higher education institutions with the promise of reduced regulation from the government, in exchange for institutions providing more transparent and publicly available information about themselves and the programmes they offer. Interestingly, the Act is very contractual in nature as it outlines the rights and obligations of students and institutions. For example, it stipulates that institutions must itemise their facilities; indicate tuition, supervision and contact hours; and specify time limits for correcting test papers. For students, the Act specifies that they are obliged to attend lectures and apply themselves to their studies, with the repercussion of failure for those who do not meet these obligations. From 2007, students are allocated a ‘learning entitlement’ which will enable them to complete one bachelor and one master’s programme. However, there are time constraints on the entitlement: if students extend the duration of their study period, they will be required to pay the tuition fees. Finally, the Act also states that higher education will increasingly be provided on an individual and made-to-measure basis. Teaching staff will be required to initiate one-to-one contact with students to discuss their progress, with the expectation that this will increase the commitment of both students and teachers.

Sweden

Since 1990, the Swedish parliament has funded the Council for the Renewal of Higher Education, which supports pedagogical development at universities and colleges; distributes grants for the development of quality and renewal of undergraduate and postgraduate education; monitors the results of projects and initiatives supported by the Council; disseminates information about research and the development of higher education; and initiates working groups on particular areas of higher education. Of particular interest is the Council’s competitive grants scheme for individual academics, which is designed to provide incentives for staff to undertake innovative teaching and learning initiatives. There is a strong emphasis on student participation in the projects, and the grants scheme supports staff for between one and three years by paying for a proportion of their time to be spent on project work (Council for the Renewal of Higher Education, 2007).

The National Agency for Higher Education is the central body responsible for the quality assurance of higher education institutions in Sweden. In the period 2007 – 2012, its quality assurance activities will include:

• the evaluation of subjects and programmes every six years;

• audits of quality assurance procedures at higher education institutions;

• thematic evaluations which focus on specific aspects of quality at the institutional level and attempt to disseminate good practices and stimulate improvements;

• distinguishing centres of educational excellence; a new initiative that recognises educational organisations that “are particularly eminent and can demonstrate very high educational standards” (National Agency for Higher Education website, 2007). The scheme is open to departments and units, but not whole institutions. The application criteria require evidence of:

  o an organisational structure, a quality assurance system and infrastructure that function exceptionally well;
  o the organisation run by a competent management/administration and committed teachers who have the relevant knowledge, experience and capacities;
  o the organisation firmly underpinned by an explicit and robust academic and/or artistic foundation and/or tried and tested experience;
  o the teaching and forms of examination designed in accordance with the contents and objectives of its programme;
  o student learning fostered in an eminent manner;
  o students’ achievement of exceptional results.

The National Agency also prepares and conducts national surveys of students and teachers to assess the quality of higher education. These are described in more detail in section 2.2:

• A Mirror for Students: a student engagement survey.

• A Mirror for Postgraduate Students: a survey of the postgraduate students’ perceptions of their course and study experiences.

• A Teacher Survey: a survey of teachers’ perceptions of their working conditions, working hours and teaching.

Italy

Prior to the Bologna Declaration, the Italian higher education system was completely governed by the state, and operated within national regulations of academic programmes where even the names of courses were determined by law. The purpose of such detailed regulation was to ensure that the same standards would be expected and applied across the country. In this context, accreditation was considered unnecessary (Finocchietti & Capucci, 2003). The Bologna process led to this being reconsidered, and the government has now devolved power to higher education institutions to define the name and educational objectives of each study programme; teaching activities; content of curriculum; and assessment. In 2001, the government established an accreditation body to inform it on funding decisions and allocations. The National Committee for the Evaluation of the University System (CNVSU) was delegated the task of carrying out the accreditation process. It was decided that rigid and detailed accreditation standards should be avoided as the sector adapted to the new system of accreditation. The accreditation of academic programmes was favoured over institutional accreditation as it was less expensive and time-consuming. However, the CNVSU does conduct institutional accreditation for new universities to monitor their achievement and to check whether their quality standards are acceptable (Buonaura & Nauta, 2004). In addition to its accrediting duties, the CNVSU also compiles annual external assessment plans for individual institutions or single teaching units, publishes an annual report on the university evaluation system, and encourages innovative quality assessment procedures and practices. Student opinion is an important aspect of quality assurance in Italy. Since 1999, university students have been invited to complete questionnaires relating to the quality of teaching experienced. The information is then sent to the government and the CNVSU (Iezzi, 2005).

Germany

Since the Bologna Declaration in 1999, German higher education institutions have been engaged in a process of deregulation towards greater institutional autonomy and reliance on market mechanisms. This has not been straightforward, as there are continuing tensions between the government’s desire to give direction to institutions and the institutions’ desire for full autonomy. This is illustrated by the Framework Act for Higher Education, which contains top-down directives on examination regulations and teaching roles. Quality assurance in Germany is carried out through a combination of evaluation and accreditation. Evaluation is designed to assess the strengths and weaknesses of institutions and degree programmes, in order to assist universities and colleges in adopting systematic quality assurance and quality enhancement strategies. The evaluation process involves an institutional review, external peer review, and the publication of evaluation results. The quality of degree programmes is now also assured through accreditation, a process which was completely new to Germany (Schade, 2003). The German Accreditation Council was established in 1998 to oversee the process. It currently acts as a coordinating body for accreditation agencies and sets the standards by which accrediting bodies assess the content and quality of degree programmes.

A quality assurance approach that has attracted international attention is carried out by the Centre for Higher Education Development (CHE). CHE makes institutional comparisons possible by conducting regular surveys of students and faculty, issuing annual rankings of subjects and departmental performance, and making these accessible on its website. Students are asked about their experiences and satisfaction, and faculty are asked to give ‘insider tips’ on institutions they would recommend as the best places to study. A key indicator in the ranking system is time-to-completion, a particularly important indicator in the German context considering the length of time students take to complete their studies and the subsequent ‘over-aging’ of graduates. The CHE ranking system is interesting for a number of reasons:

1. The system is multidimensional and does not weight or aggregate individual indicator scores. This recognises that an institution’s performance on one indicator should not determine its overall performance (Usher & Savino, 2007).

2. CHE does not produce league tables, but rather places universities into a top group, middle group, or bottom group between which there are statistically significant differences. This ensures that minor differences produced by random fluctuations in the assessment of performance are not interpreted and exacerbated as real differences.

3. The rankings are subject-specific, as CHE believes that aggregation at the institutional level is not informative for prospective students interested in studying a specific subject.

4. Because CHE does not weight indicators and aggregate data, it is possible for users to create their own weightings and rankings by selecting the indicators and criteria on which they wish to compare institutions, which effectively passes the controlling power of defining ‘quality’ to the stakeholders. This may be the most significant aspect of all (Usher & Savino, 2007). The Commission on the Future of Higher Education in the US recently recommended the creation of a similar type of database that will provide information so that potential students can develop their own personal rankings based on their own criteria (US Department of Education, 2006).
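The practical effect of points 1–4 can be illustrated with a small sketch. The fragment below is illustrative only: the institutions, indicator names and the simple rank-based banding rule are invented for the example and are not CHE’s published methodology, which groups institutions on the basis of statistically significant differences rather than fixed terciles.

```python
# Illustrative sketch only: a toy, user-driven comparison in the spirit of the
# CHE approach described above. The institutions, indicators and the simple
# rank-based grouping rule are invented for illustration; CHE's actual method
# groups institutions using tests of statistically significant differences.

from typing import Dict, List

# Hypothetical indicator scores (higher = better) for one subject area.
scores: Dict[str, Dict[str, float]] = {
    "University A": {"student_satisfaction": 7.9, "time_to_degree": 6.1, "library": 8.2},
    "University B": {"student_satisfaction": 6.4, "time_to_degree": 7.8, "library": 5.9},
    "University C": {"student_satisfaction": 8.5, "time_to_degree": 5.2, "library": 7.1},
    "University D": {"student_satisfaction": 5.8, "time_to_degree": 8.6, "library": 6.6},
}

def group_by_indicator(data: Dict[str, Dict[str, float]],
                       selected: List[str]) -> Dict[str, Dict[str, str]]:
    """Assign each institution to a top/middle/bottom group for each indicator
    the user selects, without weighting or aggregating across indicators."""
    groups: Dict[str, Dict[str, str]] = {inst: {} for inst in data}
    for indicator in selected:
        ranked = sorted(data, key=lambda inst: data[inst][indicator])
        n = len(ranked)
        for position, institution in enumerate(ranked):
            if position < n / 3:
                band = "bottom group"
            elif position >= 2 * n / 3:
                band = "top group"
            else:
                band = "middle group"
            groups[institution][indicator] = band
    return groups

# A prospective student chooses the indicators that matter to them;
# no league table or single overall score is produced.
print(group_by_indicator(scores, ["student_satisfaction", "library"]))
```

Because no overall score is computed, an institution can sit in the top group on one indicator and the bottom group on another, which is precisely the multidimensional picture the CHE approach aims to preserve.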

Hungary

Many Central and Eastern European countries began using accreditation to regulate the burgeoning private higher education sector in the early 1990s. The Hungarian Accreditation Committee (HAC) was established in 1993 to accredit higher education institutions. After the first round of institutional accreditation ended in 2000, the HAC conducted a pilot project where all of the country’s study programmes for the disciplines of History and Psychology were evaluated. The reception and results of these disciplinary reviews were very positive and the HAC has now established separate procedures for the accreditation of institutions and disciplines.

Institutional accreditation evaluates the standards of:

• education and training activities and conditions
• research activities
• facilities
• staff
• organisational structure
• infrastructure

Discipline accreditation evaluates the standards of:

• curriculum content
• proportion of practical and theory-based instruction
• staff qualifications
• infrastructure

Accreditation is essential for institutions and programmes to receive a government-issued operating licence. Each institution must undergo the accreditation process every 8 years, with an interim check every 4 years. Results of the accreditation procedure are publicly available in the Academic Bulletin and on the HAC website.


2.2: Indicators of student experience, satisfaction and engagement

The key survey instruments and measures used to identify student engagement and satisfaction at the national or sector level are briefly reviewed in this section.

National or sector-wide use of student feedback

Higher education institutions around the world have been seeking student feedback on teaching or subjects for several decades. This information has primarily been used as an evaluation tool to inform teachers and subject coordinators. In contrast, the development and use of surveys to collect student feedback on their experience of their whole degree or the institution is a more recent development. The systematic use of such surveys to gather data across several institutions or the sector is only now being considered in many countries, with increasing value being placed on student feedback surveys by governments and their agents as an indicator of teaching quality. National student evaluation surveys used for quality assurance purposes are expected to demonstrate the following features:

1. The surveys have demonstrated psychometric reliability and validity.

2. The surveys explicitly articulate a particular perspective on what constitutes ‘quality’ teaching and learning. In some cases this perspective is simply an agreed set of values about what is ‘good’ teaching; in others it is an empirically derived or theoretical perspective on teaching and learning. The underlying perspective that underpins the survey instrument has implications for how the results can be used to drive evidence-based policy development and teaching improvements.

The following section briefly summarises the most common student feedback surveys in national or sector-wide use.
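Before turning to the individual surveys, the first of these features can be made concrete. Internal-consistency reliability of a multi-item survey scale is commonly summarised with a statistic such as Cronbach’s alpha; the sketch below uses invented five-point responses purely for illustration and is not drawn from any of the instruments reviewed here.

```python
# Minimal illustration of an internal-consistency (Cronbach's alpha) check for
# a survey scale. The five-point responses below are invented for the example;
# real national survey analyses involve far larger samples and additional
# validity work (factor structure, criterion validity, and so on).

def cronbach_alpha(item_scores):
    """item_scores: list of lists, one inner list of responses per item."""
    k = len(item_scores)                      # number of items in the scale
    n = len(item_scores[0])                   # number of respondents

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_variances = sum(variance(item) for item in item_scores)
    totals = [sum(item_scores[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_variances / variance(totals))

# Four hypothetical scale items answered by six respondents on a 1-5 scale.
responses = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [3, 5, 2, 4, 1, 5],
    [5, 4, 3, 4, 2, 4],
]
print(round(cronbach_alpha(responses), 2))   # values above ~0.7 are usually taken as acceptable
```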

Australia
Australia is a leader in the sector-wide use of surveys of students’ experiences of university teaching and learning, with the Course Experience Questionnaire (CEQ) administered nationally since 1993. The original survey was extended with additional scales in 2001, and is administered together with the Graduate Destination Survey (GDS), which has been sent to all graduating students since 1972. Some of the data derived from the CEQ and the GDS are used to determine the outcomes of the Learning and Teaching Performance Fund.

The Australian Graduate Survey (AGS)
Graduate Careers Australia (GCA) is responsible for the administration of the Australian Graduate Survey and works closely with the universities and the Department of Education, Science and Training to improve the quality of the data, data collection and response rates. The AVCC and GCA have jointly released a Code of Practice and Guidelines for the administration of the CEQ, PREQ and GDS (2005).


The survey is comprised of two parts, the Graduate Destination Survey and the Course Experience Questionnaire or the Postgraduate Research Experience Questionnaire.

Graduate Destination Survey
Australian universities have administered the Graduate Destination Survey (GDS) since 1972, under the guidance of Graduate Careers Australia (GCA). The GDS is sent to all students who complete requirements for a degree in Australian universities. It focuses on details of current employment or study, as well as questions related to job search strategies. Traditionally, these data have been used by universities to advise both prospective and current students, and staff, about employment opportunities in different fields of education. More recently, two GDS variables, the percentages of (Australian citizen/permanent resident bachelor) respondents in full-time work and in full-time study, have been used in the LTPF. The GDS has recently been renamed the Australian Graduate Survey (AGS).

Course Experience Questionnaire
Australian universities have administered the Course Experience Questionnaire (CEQ) since 1993 with the Graduate Destination Survey (now AGS). The CEQ was developed by Professor Paul Ramsden (Ramsden, 1991; Wilson, Lizzio, & Ramsden, 1997) as a teaching performance indicator, focusing on aspects of the classroom teaching environment which previous research had found were linked to deep and surface approaches to learning, and higher quality learning. These scales include Good Teaching; Clear Goals and Standards; Appropriate Assessment; and Appropriate Workload. The CEQ also includes an outcome scale, Generic Skills, and an “Overall Satisfaction with Course Quality” item. Arguing that the original CEQ was “based on a theory of learning which emphasises the primary forces in the undergraduate experience as located within the classroom setting”, Griffin, Coates, McInnis, and James (2003; p.260) developed an expanded range of CEQ scales, reflecting features of contemporary higher education settings beyond classroom settings. The expanded scales focus on Student Support, Learning Resources, Course Organisation, Learning Community, Graduate Qualities, and Intellectual Motivation. Since 2002, Australian universities have been required, at a minimum, to collect graduate responses on the Good Teaching and Generic Skills scales, and the Overall Satisfaction item. In addition, universities may also collect data using either the additional core CEQ scales, the extended scales, or a combination of both, subject to the limitation that the selected items take up no more than one page of the AGS. The CEQ is an extensively validated student feedback survey. Unlike most surveys, it is explicitly based on a well-researched theoretical model of learning. This strength is also a potential weakness. The model of learning on which the survey is based recognises that learning is a complex process, and the CEQ focuses on student perceptions as a key indicator of this process. Many of the uses made of the CEQ data ignore this complexity. In addition, while student perceptions are important, they are not the only aspect of a quality teaching and learning experience.


However, as there are few surveys in use that allow such clear connections to be made between student perceptions data and an extensive and growing body of research on the student experience, it provides an important evidence-based instrument to inform policy and teaching enhancement activities. The availability of fifteen years of national data provides trend data for comparison purposes that is unrivalled anywhere else in the world. Traditionally, components of the AGS, and the CEQ scales, have been intended for benchmarking teaching quality primarily at the degree level, allowing tracking over time of the quality of a specific degree, as well as benchmarking similar programmes at different institutions. Benchmarking has at times been difficult because of relatively low response rates and the use of generic names for programs of study; however, many institutions have used the data to inform quality audits, curriculum reviews, and internal planning and funding decisions. Components of the AGS and CEQ are now used to contribute to decisions on performance-based funding of institutions, and more recently, cognate disciplines within institutions, through the Learning and Teaching Performance Fund. This use of the AGS/CEQ has prompted intense discussion within the Australian higher education sector, given concerns over differential survey practices and response rates between institutions. To address these concerns, DEST commissioned the Graduate Destination Survey Enhancement Project (Graduate Careers Australia, 2006). The broad goals of this project were to “…design and develop the processes, resources, and ideas needed to generate a new era of research into Australian student experiences and graduate outcomes” (Graduate Careers Australia, 2006; p.xxi), in order to improve the quality of responses to the surveys and confidence in their findings and usage (particularly with regard to the LTPF).

The Postgraduate Research Experience Questionnaire (PREQ)
The Postgraduate Research Experience Questionnaire (PREQ) was introduced in 1999 as a result of a growing recognition that the CEQ was not appropriate for the increasing number of postgraduate research students in Australia. The PREQ investigates the opinions of recent graduates regarding their higher degree by research experience. It is administered up to twice a year in association with Graduate Careers Australia (GCA). The current form of the PREQ consists of 28 items and comprises the following scales:
• Supervision
• Intellectual Climate
• Skills Development
• Infrastructure
• Thesis Examination Process
• Clarity of Goals and Expectations
• Overall Satisfaction

The PREQ is still in development in terms of optimal data analysis methods. Many of the issues discussed in relation to the CEQ are pertinent to the PREQ.



Australasian Survey of Student Engagement (AUSSE)
ACER is currently collaborating with around 30 Australian and New Zealand universities to develop the Australasian Survey of Student Engagement (AUSSE). The AUSSE is based on the USA’s National Survey of Student Engagement (NSSE), and the survey processes and survey items are currently being validated for use in Australasia. The comments on the National Survey of Student Engagement (NSSE) below should be read in conjunction with this summary. The AUSSE survey instrument focuses on students’ engagement in activities and conditions which empirical research has linked with high-quality learning. The survey is focused on the individual institution for internal use, but it is intended that it will be generalisable for benchmarking purposes (ACER, 2007). Student engagement focuses on the interaction between students and the environment in which they learn. Students are responsible for their level of involvement, but the institution and staff members are responsible for fostering an environment which stimulates and encourages the students’ involvement. Student engagement data provide information on learning processes, and are considered to be one of the more reliable proxy measures of learning outcomes. They can also indicate areas in need of enhancement. The data have the potential to assist institutions in making decisions about how they might support student learning and development, manage resources, monitor standards and outcomes, and monitor curriculum and services. The AUSSE has five scales that measure various aspects of student engagement:
1. Active learning (students’ efforts to actively construct their knowledge)
2. Academic challenge (the extent to which expectations and assessments challenge students to learn)
3. Student and staff interactions (the level and nature of students’ contact with teaching staff)
4. Enriching educational experiences (participation in broadening educational activities)
5. Supportive learning environment (students’ feelings of legitimation within the university community).

First Year Experience Questionnaire
The First Year Experience Questionnaire (FYE) has been administered at five-year intervals since 1994 by the University of Melbourne’s Centre for the Study of Higher Education (Krause, Hartley, James, & McInnis, 2005). Surveying a stratified sample of first-year students from 7 universities in 1994 and 1999, and 9 universities in 2004, its goal is to “assemble a unique database on the changing character of first year students’ attitudes, expectations, study patterns and overall experience on campus” (Krause et al., 2005; p.1). It draws on the CEQ for much of its content. In addition, the 2004 FYE included items and scales focussing on student engagement, and the role of information and communication technologies in student engagement. However, with the response rate for the 2004 survey at only 24%, there are concerns about the representativeness of the most recent findings.



While the FYE survey is a research instrument rather than a national or sector-wide survey, its impact has been significant as it provided the evidence base for many universities’ strategies to improve university transition and first year retention and progression. As a consequence, many Australian universities routinely gather data from first year students using a variation of the FYE survey or an adaptation of the CEQ.

England, Wales and Northern Ireland
National Student Survey
The National Student Survey (NSS) has been used nationally by universities in England, Wales and Northern Ireland since 2005 to assist prospective students in making choices; provide a source of data for public accountability; and assist institutions in quality enhancement activities (Sharpe, 2007). Unlike the CEQ, it is administered to students in their final year of study. The NSS drew on the CEQ for its conceptual foundation, emphasising student perceptions of the learning environment and subsequent impacts on learning outcomes. The first iteration of the NSS included 6 scales: quality of teaching; assessment and feedback; academic support; organisation and management; learning resources; and personal development, in addition to an overall satisfaction item. The second iteration tested two additional scales adapted from the CEQ: learning community and intellectual motivation. In 2007, individual institutions will be able to nominate pilot scales from a subset of 10 additional scales, e.g. careers, course content/structure, workload, and the physical environment. Sharpe (2007) notes that “Being based on the CEQ, the theory-base of the NSS is the same as for the CEQ, i.e., it emphasises the importance of students' perceptions of their learning context and the impact of this upon their learning outcomes.” (p.10).

Destinations of Leavers from Higher Education (DLHE)
The Destinations of Leavers from Higher Education (DLHE) survey collects information on the activities of students following departure from a higher education institution and has been used since 2002/03. It replaced the First Destinations Supplement (FDS) used between 1994/95 and 2001/02. It is managed by the Higher Education Statistics Agency (HESA), and is carried out approximately 6 months after students have completed their degree. The data collected and reported are similar to those of the Australian Graduate Survey.

First Year Experience Survey
With a focus on retention and attrition, Yorke et al. (1997) investigated the experiences of first year students and their reasons for discontinuation in six institutions in the north west of England. Extending this study, Yorke and Longden (2007) carried out a two-phase survey project investigating the experience of first year students in 25 universities in the United Kingdom, spanning a range of institution types and nine broad fields of study. A sampling frame was used which ensured no institution was asked to survey more than 3 fields of education. The first phase investigated the experiences of first year full-time students following their first semester, while the second phase (which began in January 2007) surveys ex-first year students about their reasons for discontinuing.



In the first phase, survey items (Likert format) were not designed to reflect specific scales. However, principal components analysis of the data suggested at least 5 scales with adequate psychometric properties. These were labelled: understanding the academic demand; supportive teaching; stimulating learning experience; feedback; and coping with academic work. While sub-group descriptive statistics are presented in the report, inferential statistics are not, due to concerns about the adequacy of the sampling frame and substantial variations in response rates across institutions. The exclusion of part-time first year students from the sampling frame also limits the conclusions that can be drawn from the survey. Similar to the Australian First Year Experience survey, the UK survey was designed and administered as a research instrument, rather than for sector-wide use.

United States of America
National Survey of Student Engagement (NSSE)
The NSSE survey was developed in 1998 to gather information on the quality of undergraduate studies. It was also a response to increasing interest in how students spend their time, and whether they participate in postsecondary programs and activities identified as beneficial to their learning and personal development. Survey items reflect “good practices” in which students and institutions engage and that have been empirically shown to be associated with better outcomes in college. The survey is administered once a year to participating institutions, with the cost based on the institutions’ undergraduate enrolment numbers. There are currently no requirements for institutions to administer the NSSE; however, it has been administered to 1,600,000 students from 1,100 different four-year colleges and universities to date. The institutions that participate generally reflect the national distribution of the 2005 Basic Carnegie Classifications (NSSE, 2006). The NSSE:
is designed to obtain, on an annual basis, information from scores of colleges and universities nationwide about student participation in programs and activities that institutions provide for their learning and personal development. The results will provide an estimate of how undergraduates spend their time and what they gain from attending college. Survey items on The National Survey of Student Engagement represent empirically confirmed "good practices" in undergraduate education. That is, they reflect behaviors by students and institutions that are associated with desired outcomes of college (NSSE, 2007a).
It is intended that institutions use their NSSE data to identify aspects of the undergraduate experience inside and outside the classroom that can be improved through changes in policies and practices more consistent with good practice in undergraduate education. It is also anticipated that the NSSE information will be used by prospective college students, their parents, academic advisers, institutional research officers, and researchers to learn more about how students spend their time at different colleges and universities, and what they gain from their experiences.



Kuh (2001) describes four main factors underlying students’ responses to the 22 core items, representing activities in which students engage, inside and outside the classroom: student-faculty activities, student-student activities, activities reflecting diversity, and class-work activities. Three factors were found to underlie student responses concerning educational and personal growth: personal-social, practical competence, and general education. Three factors were found to underlie responses to items tracking opinions about the school: quality of relations, the social climate of campus, and the academic quality of the campus. Several other surveys have been designed within the same theoretical framework as the NSSE. These include the Beginning College Survey of Student Engagement (BCSSE), which measures “entering first-year students’ pre-college academic and cocurricular experiences, as well as their interest in and expectations for participating in educationally purposeful activities during college” (NSSE, 2007b); the LSSSE, for students of law schools; the HSSSE, for high school students; and the CCSSE, for students of community colleges. There is a steadily increasing amount of research related to the measurement properties and institutional use of the above student engagement surveys (see NSSE, 2007c for a current list), with the theoretical and empirical connections between student engagement and student learning (e.g. Ramsden, 1991) continuing to be explored. For example, weak links have been established between student engagement and critical thinking and grades, indicating that there are factors yet to be identified that impact on the quality of student learning (Carini et al., 2006).

The College Student Experience Questionnaire (CSEQ)
The NSSE was developed from the College Student Experience Questionnaire (CSEQ). The CSEQ is in its fourth edition and is also a voluntary survey that is conducted by the Center for Postsecondary Research at Indiana University, Bloomington. The CSEQ recognises that the more engaged students are, the better their learning outcomes and development. The survey assesses student effort in using institutional resources and opportunities designed for their learning and development; student perspectives on the priorities and emphases of the campus environment; student self-reported progress toward a range of educational outcomes (general education; personal development; science and technology; intellectual skills; and practical and vocational competence); and other background information such as demographics, grades and employment status. Institutions are also invited to add up to twenty additional questions of special interest to the survey, with five response options for each item. The survey is “one of the few national assessment instruments that inventories both the processes of learning (e.g., interactions with faculty, collaboration with peers, and writing experiences) and progress toward desired outcomes of college (intellectual skills, interpersonal competences and personal values)” (Borden, 2001). Institutions have the onus of administering the survey (in paper or online format) to their undergraduate students, and returning the data to the Center for Postsecondary Research for mean and frequency analyses and the compilation of an institutional report. Institutions can ask for specific statistical analyses of the data at an extra cost. Institutions generally use the results of the CSEQ to:
• Determine program effectiveness;

Review of Australian and international performance indicators and measures of quality of teaching and learning in higher education Section 2: Global trends and initiatives in teaching and learning

• Measure learning outcomes and impact of campus environments (learning communities, etc.);
• Assess academic year initiatives (first-year experience courses, senior portfolio projects, etc.);
• Merge with and complement institution-wide data for a richer understanding of student development issues;
• Compile accreditation data;
• Examine efforts of academic affairs and student affairs divisions;
• Assess division-wide programming and student learning; and
• Assess student involvement in a variety of campus initiatives.

The College Student Expectations Questionnaire (CSXQ)
The Center for Postsecondary Research also publishes the second edition of the College Student Expectations Questionnaire (CSXQ), which assesses the goals and expectations of pre-college and first-year students in regard to the expected nature and frequency of interaction with faculty members; involvement with peers from diverse backgrounds; use of campus learning resources and opportunities; satisfaction with college; and the nature of college learning environments. The CSXQ is administered at the beginning of first semester and can serve as a pre-test of student expectations. Combined with the results of the CSEQ, which is administered near the end of the academic year, institutions can assess the extent to which student and institutional expectations have been met. Institutions can also use CSXQ results to inform:
• Institutional research, evaluation, and assessment of the student experience;
• Enrolment management, student recruitment and retention initiatives;
• Faculty development, advising, and academic support services;
• First-year experience programs; and
• Orientation, residence life, and student activities.
The CSXQ has 87 items in common with the CSEQ, apart from different wording of instructions, and responses are given on a scale of one to four. More than 33,000 students at approximately 39 four-year public and private institutions have completed the second edition of the CSXQ at least once since 1998.

The Cooperative Institutional Research Program (CIRP)
The Cooperative Institutional Research Program, administered by the Higher Education Research Institute (HERI), is a national longitudinal study of the American higher education system. First established in 1966, the CIRP is the nation's largest and oldest empirical study of higher education, involving data on over 1,800 institutions and over 11 million students. It is regarded as the most comprehensive source of information on college students. The annual report of the CIRP Freshman Survey provides normative data on each year's entering college students.



The CIRP Freshman Survey and the two CIRP follow-up surveys, Your First College Year (YFCY) and the College Senior Survey (CSS), are the only national surveys specifically designed to evaluate students during their entire college experience, including the ability to evaluate the impact of the students’ experience and growth during their first year of college. Each year, approximately 700 two-year colleges, four-year colleges and universities subscribe to the service, with the Freshman Survey administered to over 400,000 entering students during orientation or registration. The survey covers a wide range of student characteristics: parental income and education, ethnicity, and other demographic items; financial aid; secondary school achievement and activities; educational and career plans; and values, attitudes, beliefs, and self-concept. Participating institutions receive a detailed profile of their entering cohort, as well as national normative data for students in similar types of institutions. These campus profile reports, together with the national normative profile, provide important data that can be used by the institution in a variety of program and policy areas:
• Admissions and recruitment
• Academic program development, review and self-assessment
• Institutional self-study and accreditation activities
• Public relations and advancement/development
• Institutional research and assessment
• Retention studies
• Longitudinal research about the impact of policies and programs
An important aspect of participating in the CIRP surveys is that the data are benchmarked against similar schools’ results. In addition, because the surveys are offered annually, trend reports can subsequently provide valuable data to empirically demonstrate changes in students over time. Although the normative data provided with the institutional reports (and published annually in The American Freshman) are based on the population of first-time, full-time freshmen, participating institutions also receive separate reports for their part-time and transfer students. Additionally, participating campuses can obtain supplemental reports profiling students by various subgroups (for example, by intended major or career, by academic ability, by home state). The College Senior Survey (CSS), formerly the College Student Survey, is an exit survey which provides institutions with feedback on the students' academic and campus life experiences and on their post-college plans immediately following graduation. When used in conjunction with the CIRP Freshman Survey or the Your First College Year (YFCY) survey, the CSS generates longitudinal data on students’ cognitive and affective growth during college as well as their post-college plans. The CSS has been used by institutional researchers to study the impact of service-learning, leadership development, and faculty mentoring, and to assess a wide variety of instructional practices. Annual reports are routinely provided, as are trend reports that draw on up to 40 years of data to explore a range of aspects, e.g. the first year of college, first-generation college students, retention and service learning.



Canada
The NSSE, described above, was used in 17 Canadian universities and colleges in the spring of 2007.

British Columbia College and Institute Student Outcomes Survey (CISO)
Graduates of British Columbia's (BC) 22 public colleges, university colleges, and institutes are contacted by telephone between 9 and 20 months after they complete their degree or program of study, or have completed a significant proportion of their study. They are invited to respond to the British Columbia College and Institute Student Outcomes (CISO) Survey. BC Stats manages the collection of student outcomes information on behalf of the Outcomes Working Group (OWG), representing the Ministry of Advanced Education (AVED). Detailed annual reports are available on the web and highlight reports are widely circulated to prospective students. Former students are asked a series of questions about their employment and further education experiences since leaving their program. They are also asked for feedback on their levels of satisfaction with various aspects of their educational experience. In addition, each year a set of special questions is added to the questionnaire, allowing for a more in-depth analysis of an area of particular interest. Response rates have varied between 49% and 77%, with a 57% response rate achieved in 2006. The CISO Survey data are used to provide information to institutions to support them in evaluating and improving programmes and services; assisting prospective students in their programme choices; and enhancing understanding of the education and labour markets. In 2001, the Survey was modified to include scales and items focusing on “learner-centred practice”. The framework used consists of 5 factors: learner and learning support services; teaching and learning processes; curriculum; campus life; and learning gains. For the first time in 2006, former students who took Baccalaureate programs at British Columbia’s public colleges and universities were not included in the CISO survey, reducing the numbers involved in the survey by 10%. Instead these students now take part in the Universities Baccalaureate Graduates Survey (UBGS) (see below).

University Baccalaureate Graduates Survey (UBGS)
The Universities Baccalaureate Graduates Survey (UBGS) has been operating since 1995 and is conducted by The University Presidents’ Council of British Columbia. The universities, together with The University Presidents’ Council of British Columbia and the British Columbia Ministry of Advanced Education, collaborate and participate in the project. The survey project is funded by the Ministry of Advanced Education. Students are interviewed by telephone two years and five years after graduation using the UBGS.


The reports resulting from the interviews provide information on the graduates' further education, employment and occupations, current job earnings, financing of university education and satisfaction with their education. All six British Columbia public universities are currently participating in the project. Detailed reports are available on the web and highlight reports are widely circulated to prospective students.

British Columbia Early Leavers Survey
In 2000, the Universities Student Outcomes Project conducted a survey of university students who left their studies prior to earning their degree. The survey was part of a long-term research project dedicated to gathering student outcomes information for universities and the government. The University Presidents’ Council of British Columbia and the Ministry of Advanced Education, Training and Technology established the project. It was directed by the University Outcomes Working Committee and managed by the Centre for Education Information. The survey was undertaken to explore university early leaver behaviour. The survey asked students why they attended university and why they left, what they thought of their university experience and what their educational and employment outcomes were. The survey was designed to be exploratory and so contained a mixture of quantitative and qualitative questions which were subsequently coded into categories (Conway, 2001). Four of the six public British Columbia universities provided data for the survey and all former students from 1997 to 1999, defined as non-graduating, were contacted by telephone. Almost 6,000 early leavers were surveyed, a response rate of 63%.

Taiwan
A standardised instrument for investigating Taiwanese students’ perceptions of their institutions’ learning environments has been developed by Huang (2006). The College and University Environments Inventory (CUES-I) consists of 7 scales: student cohesiveness, faculty-student relations, administrative support, language abilities, emotional development, library resources, and student services. Huang (in review) describes the use of the survey to explore relations between the above dimensions and student academic aspirations and satisfaction in a random sample of 12,423 students in 42 Taiwanese universities. The study reported statistically reliable relations between the above sets of variables at both the individual level and the aggregated university level. At present, the use of the CUES-I has been limited to research purposes. The scope of the field testing to date suggests the CUES-I might be useful for benchmarking in the Taiwanese higher education sector in future.

Hong Kong
A survey of employers of students of three Hong Kong higher education providers (City University of Hong Kong, the Hong Kong Polytechnic University and the Vocational Training Council) was conducted in 2002 and 2003 by the Education and Manpower Bureau. The goals of the survey were to obtain the opinions of employers about full-time publicly funded sub-degree graduates, regarding graduate attributes such as language proficiency, numeracy, IT literacy, analytic and problem-solving ability, work attitude,



interpersonal and management skills, and technical job-related skills. Employers were also invited to suggest ways of improving the quality of graduates. This survey is focussed only on outcomes. It uses employer ratings rather than self-assessment by graduates. The survey therefore presents a view of student learning in terms of employability and is silent on the role of higher education in developing individuals capable of contributing as agents of social good.

Sweden
The National Agency for Higher Education has conducted national surveys for students and teachers to assess the quality of higher education.

A Mirror for Students
A Mirror for Students is a student engagement survey designed to determine students’ commitment and activity levels in relation to their studies. The survey was developed from research that indicated that student engagement is an important prerequisite if education is to achieve positive results (Swedish National Agency for Higher Education, 2007). The survey items were produced in collaboration with a reference group and were tested on pilot groups of students before being sent out in 2006 to students who had been attending their current higher education institution for at least two semesters (except for those specialising in fine arts). The survey will be administered again in 2007.

A Mirror for Postgraduate Students
A Mirror for Postgraduate Students is a survey for postgraduate students on their course and supervision experience; their study environment; relationship with staff; their time commitment to their studies and to other forms of work; and how their personal values have been affected by their studies. The survey was developed by the Swedish National Agency in collaboration with a reference group and piloted on a group of postgraduate students from various subject areas at different higher education institutions. In 2003, the survey was randomly sent to 9,800 postgraduate students with at least one year’s experience of postgraduate study who were still actively enrolled.

A Teacher Survey
A Teacher Survey was designed to collect information from the personal perspective of teachers on their working conditions, their working hours and their teaching. In 2002, the survey was randomly sent to teachers and researchers employed at higher education institutions in 2001. Questions asked included:
• How much work do you do every week?
• What scope do you have to influence your own work situation in terms of working hours and tasks?
• Do you succeed in attaining the basic goals for higher education?
• What needs do you think your students have and do you manage to meet them?



Italy
Short Form Questionnaire (SFQ)
Since 1999, universities have invited students to complete questionnaires relating to the quality of teaching experienced. The information is then sent to the government and the National Committee for the Evaluation of the University System (CNVSU). The process was initially highly decentralised, with faculties acting on their own initiative. This resulted in different ways of structuring questions, managing the distribution and collection of questionnaires, processing data, spreading output information, and determining what measures had to be applied to enhance teaching performance. In 2002, the CNVSU organised an expert team to devise a teaching evaluation questionnaire, the Short Form Questionnaire (SFQ), to ensure homogeneous evaluation in all Italian universities. The SFQ asks students about the:
1. structure of the degree (students’ opinions on the study load and general organisation);
2. organisation of the course (evaluation of the course structure and teachers, definition of the modalities and rules for examinations, and availability of teachers to meet with students);
3. didactic activity and study (opinions on background knowledge possessed by the student, on the utility of the didactic material and the integrative didactic activities, as well as on the sustainability of the study load);
4. infrastructure (opinions about the organisation of lessons and adequacy of classrooms); and
5. interest and satisfaction.

Other higher education groups
There are now several consortia of universities that represent affiliations of universities based on type of institution rather than national boundaries. Two of the most familiar of these in Australia are AC21 and Universitas 21. AC21 has begun to explore the possibility of using a common student feedback instrument across member institutions to gather data on student experiences for benchmarking purposes. The survey being used for this is The University of Sydney’s ‘Student Course Experience Questionnaire’ (SCEQ), which is an adaptation of the CEQ for use with currently enrolled students. The SCEQ was trialled at Nagoya University in 2005 and 2006, and there has been discussion about trialling the survey in a North American university. The SCEQ has also been used at other universities outside Australia, such as Oxford in the UK. A number of Australian universities have agreements to share items and scales from their institution-wide surveys for comparative and benchmarking purposes. Institutional practices such as these, and practices and uses of student feedback at subject level, are currently being investigated as part of the Carrick Institute project and will be reported later in 2007.



2.3: Indicators of student learning
The key instruments and measures used to identify student learning at the national or sector level are briefly reviewed in this section.

Evidence and quality of student learning and ‘value-adding’
Developing standard ways to measure the quality of student learning has been a goal of governments, researchers and testing houses for a considerable time. The goal of measuring the growth and development that takes place in schools and universities, often termed ‘value added’, is pursued by governments and funding bodies as a way to demonstrate that educational institutions are providing quality education to their students. Examples of test instruments, primarily developed by independent testing houses, to identify the learning standards of students for use in higher education institutions in Australia and the United States are outlined below.

Australia
The Australian Council for Educational Research (ACER) in Australia has developed a number of instruments to test student learning at the higher education level. The majority of these are designed to support admissions decisions. For example, ACER offers admissions tests for medicine and dentistry (GAMSAT, MSAT, UMAT), health professionals (HPAT), special admissions (STAT, uniTEST), law (ALSET), business (Business Select) and vocational and apprenticeship admissions. Other instruments are designed to test general abilities, primarily for use within institutions, for example, tertiary writing (TWA), mathematics (TEMT) and graduate skills (GSA).

uniTEST
The uniTEST is an admissions test “designed to assess generic reasoning and thinking skills that underpin studies at higher education and that are needed for students to be successful at this level”, and is based in the domains of mathematics and science, and humanities and social science (ACER website). It is a 95-item multiple-choice test, taken over 2.5 hours, and covers 6 broad areas: dealing with information, problem solving, decision making, argumentative analysis, interpretation and socio-cultural understanding. A new DEST initiative, reported in the 2007 budget, is a schools-based pilot project in which 25% of final-year school students completing school- or state-based tertiary admissions assessments will also complete an admissions test such as uniTEST. This is similar in concept to the SAT tests used in the USA. Interestingly, a number of US universities and colleges are reconsidering their high reliance on standardised tests to inform their admissions decisions.

Graduate Skills Assessment (GSA)
ACER was commissioned by DEST to develop a Graduate Skills Assessment test in 1999.



It was designed as a test of the generic skills of students as they began university and at the time of graduation. Through consultation with stakeholders from universities as well as external employers, 17 dimensions of generic (or ‘transferable’) and desirable skills were identified, four of which were included in the test. These are critical thinking; problem solving; interpersonal understandings; and written communications. The two parts of the test are a multiple-choice test that takes 2 hours to complete and a writing test that takes 1 hour to complete. Although the different components attempt to capture students’ ability to comprehend, analyse and evaluate tasks they could face in the real world, as well as students’ capability to apply similar skills in different contexts (meta-skills), the GSA does not claim to measure real-world performance directly (ACER, 2001, 2002). It was intended that the results of the test could be used as a diagnostic tool at the entry level and/or as an outcome measure/criterion in terms of admission to postgraduate study or as an indicator of generic skills for employers. Part of the rationale for the development of this test was to be able to identify the learning that is ‘value-added’ by university study. To generate interest in this test, DEST offered universities the opportunity to test their students for free. The response from the Australian universities and students ranged from lukewarm to hostile. Nevertheless, the concept continues to remain attractive to governments. A number of challenges have been raised about the use of GSA results as a measure of ‘value added’:
• The question of whether institutional quality can be inferred from the GSA results, i.e. how much can be attributed to the educational experience itself.
• The GSA investigates skills that are more likely to be brought from external experiences rather than those taught by universities, which challenges the notion that value added by the institution can be measured by an instrument like the GSA (Clerehan, Chanok, Moore & Prince, 2003).
• The standardisation of test items (based on knowledge that all students share) means the test does not measure the advanced skills taught by the institution (Clerehan et al., 2003).
• Since the GSA tests broader issues, meta-awareness of different subjects cannot be tested (Clerehan et al., 2003).
• Measures of generic skills are less sensitive to changes due to educational programs (Baird 1988, as cited in Banta & Pike, 2007).

Clerehan et al. (2003) also raised a number of concerns about the validity of the GSA. These include:
• The student sample (2000/2001) was not randomised, which raises selection issues.
• The skills measured are not suited to psychometric testing.
• There are issues with cultural and linguistic biases which raise concerns about equity and cultural inclusiveness.
• Learning is not a simple behaviour that lends itself to standardised measurement.



While the GSA could provide one means to identify students at risk of academic problems, there are serious doubts about whether institutional quality can be inferred from this test. Drawing on the experience of developing the GSA, ACER has developed other tests which attempt to measure similar aptitudes or learning.

United States of America
SAT
The SAT is the most widely recognised and most widely used admissions test for universities in the United States. Its origins lie in a series of essay examinations in nine subjects, first introduced in 1901, that constituted the entrance exams for selective colleges and universities in the northeastern United States. In 1926, the College Board commissioned the design of the Scholastic Aptitude Test to assess ability and acquired knowledge more broadly, so that the exam would not depend upon the specifics of any curriculum or be biased between people from different socio-economic backgrounds. In 1990, after considerable development, the name was changed to the Scholastic Assessment Test and, finally, in 1994, to simply SAT (with the letters not representing any words). The SAT is run by the College Board, a not-for-profit association composed of more than 4,700 schools, colleges, universities, and other educational organisations. Each year, the College Board serves over three million students and their parents, 23,000 high schools, and 3,500 colleges through major programs and services in college admissions, guidance, assessment, financial aid, enrolment, and teaching and learning. Among its best-known programs are the SAT, the PSAT/NMSQT®, and the Advanced Placement Program® (AP®). The College Board contracts the Educational Testing Service (ETS) to help develop and administer the test (College Board, 2007). The SAT claims to measure critical thinking skills that are needed for academic success in college. It consists of three major sections: mathematics, critical reading and writing. There are 10 sub-sections, including an experimental section that may be in any of the three major sections. The experimental section is used to normalise questions for future administrations of the SAT and does not count toward the final score. The test contains 3 hours and 45 minutes of actual timed sections, but in reality runs over about 5 hours with administration and orientation. Over time, there have been a number of recalibrations of the SAT scores and changes to the tests, with the last major change made in 2005. The SAT is administered seven times each year, and is supported by an extensive industry devoted to preparing students for the test.

Standardised measures of learning
As a consequence of the Measuring Up 2000 report, which scored all states zero on Learning, states undertook initiatives to administer standardised measures of learning.



State-based performances were reported in the Measuring Up 2006 report and included literacy, licensure and generic tests.

Literacy Levels of the State’s Residents
The State Assessment of Adult Literacy (SAAL) was administered to adults with an associate’s or a bachelor’s degree in participating states (Kentucky, Maryland, Massachusetts, Missouri, and New York) in 2003. The SAAL poses real-world tests or problems, and respondents are tested on their prose, document, and quantitative literacy skills. They are required to read and interpret texts (prose), to obtain or act on information contained in tabular and graphic displays (document), and to understand numbers and graphs and perform calculations (quantitative). In 2003, participating states administered the SAAL in conjunction with the National Assessment of Adult Literacy (NAAL), which is an assessment of the same literacy skills.

Graduates Ready for Advanced Study or Practice
Graduates from two-year and four-year colleges demonstrated their readiness for professional practice or advanced study by:
• taking and passing a national examination required to enter a licensed profession (e.g., nursing or physical therapy), or taking and passing a teacher licensure examination in the state in which they graduated; or
• taking a nationally recognised graduate admissions exam such as the Graduate Record Examination (GRE) or the Medical College Admissions Test (MCAT) and earning a nationally competitive score. The GRE is similar to the SAT in that it is used to inform graduate school admission and financial aid decisions. There are two GRE tests: the General test is designed to measure reasoning, critical thinking and the ability to communicate effectively in writing; the Subject test is designed to measure discipline-specific content knowledge.

Measuring Up 2006 was the first edition in the series to have data for all 50 states on the extent to which graduates were prepared for the workforce.

Learning Performance of College Graduates
Separate measures were used to assess two-year and four-year institutions. Two-year graduates were assessed with the ACT WorkKeys assessment, which examines what students can do with what they know. For example, the WorkKeys writing assessment requires students to prepare an original essay in a business setting. The Collegiate Learning Assessment (CLA) is used to test four-year graduates on their critical thinking, analytical reasoning and written communication. The assessment poses real-world tasks that a student is asked to understand and solve. For example, respondents must complete a “real-life” activity, such as reviewing and evaluating a number of documents to prepare a written presentation. The test also evaluates students’ ability to articulate ideas, make and support judgements, sustain a coherent discussion, and use standard written English.



A more detailed review of the Collegiate Learning Assessment is provided below to illustrate the issues that are raised about the use of generic and standardised tests of university students’ learning.

The Collegiate Learning Assessment (CLA): measuring the skills and knowledge of university students across institutions
The Collegiate Learning Assessment (CLA) is an outgrowth of RAND’s Value Added Assessment Initiative (VAAI) and was developed by the RAND Corporation’s Council for Aid to Education. The CLA is part of a longitudinal study of student learning, embedded in the current debate about the need for more accountability, quality improvement and transparency of higher education outcomes in the USA (Hersh, 2006). The CLA approach focuses on the institution and how well it contributes to student development and learning. Students are tested by simulating complex situations (e.g. from the workplace) that every successful graduate may face. More precisely, the CLA measures students’ critical thinking, analytic reasoning, problem solving and written communication skills, which are assumed to cut across different subjects and are generally aligned with institutional missions. The online test is administered annually to a sample of first-year and senior students (cross-sectional measure), as well as on a longitudinal basis (three times throughout a student’s college career). Since its inception in 2002, around 235 colleges have participated (CAE, n.d.). The CLA is different from other approaches to student learning assessment in that it:
• Claims to be a direct measure of student learning rather than a proxy measure.
• Focuses on general education skills, not on discipline-specific content.
• Uses a “matrix-sampling” approach, i.e. students’ abilities are assessed on a group level.
• Claims to assess the “value-added” or the institutional contribution to student learning through “deviation scores” (how well students perform compared to similarly situated students based on their SAT and ACT scores) and “difference scores” (improvement of skills measured by pre-test/post-test models; ability difference between freshmen and seniors) (Benjamin & Chun, 2003; CAE, n.d.). A toy illustration of these two statistics follows this list.
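As flagged above, the two ‘value-added’ statistics named in the final bullet can be illustrated with a toy calculation. The sketch below (Python, with invented institution-level numbers; not the CLA’s actual adjustment model) treats a difference score as senior-cohort performance minus first-year performance, and a deviation score as the gap between a cohort’s observed result and the result predicted from its entering SAT scores, here approximated with a simple linear regression.

# Toy illustration of "difference scores" and "deviation scores" as used in
# value-added arguments. All numbers are invented; this is not CLA's model.
import numpy as np

# Hypothetical institution-level data: mean entering SAT and mean assessment
# scores for first-year and senior cohorts at several institutions.
mean_sat      = np.array([1000, 1100, 1200, 1300])
freshman_mean = np.array([1010, 1090, 1210, 1280])
senior_mean   = np.array([1100, 1170, 1260, 1330])

# Difference score: raw gain from first year to senior year.
difference_scores = senior_mean - freshman_mean

# Deviation score: how far a cohort's observed score sits above or below the
# score predicted from entering ability (simple linear regression on SAT).
slope, intercept = np.polyfit(mean_sat, senior_mean, deg=1)
predicted_senior = slope * mean_sat + intercept
deviation_scores = senior_mean - predicted_senior

for i, (gain, dev) in enumerate(zip(difference_scores, deviation_scores)):
    print(f"Institution {i + 1}: gain = {int(gain):+d}, deviation = {float(dev):+.1f}")

A caveat worth noting: because a difference score subtracts two correlated, error-prone measurements, its reliability is typically much lower than that of either measurement alone; this is one of the concerns about value-added measures taken up later in this section.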

The CLA approach reflects the currently popular call for ‘direct measures’ of student learning, as it aims to measure directly what students know, through open-ended tasks. However, the CLA’s claim that it can directly measure student learning is not without criticism, with concerns that such measures:
• exclude a number of learning areas such as general intellectual ability (Klein, Shavelson, Benjamin & Bolus, 2007);
• cannot control for errors caused by vagaries such as the test situation or the student’s frame of mind (Kuh, 2006);
• cannot be used to identify why students do better or worse and how an institution could improve learning and performance;
• do not measure how learning relates to real-world performance (AASCU, 2006);
• do not measure the specific and in-depth learning that has taken place in the discipline of study; and
• do not take into account the complete impact of a university education as this unfolds, not only during but also beyond the time of studying.

It is therefore questionable whether obtaining a direct measure of learning is feasible, and what purposes are served by such a measure.

Is the search for ‘value added’ measures of learning a solution or a problem?
The issues that surround ‘value added’ measures of learning are strongly debated among researchers and practitioners. While the approach has a number of appealing features to governments, funding agencies and testing houses, there is great concern about the reliability, and more particularly, the validity of these tests. One issue concerns the way value added is measured. For example, the CLA approach assumes that the relationship between CLA scores and SAT scores in the student sample is representative of all students in a particular institution, which will in turn be representative of a national sample (Klein et al., 2007). It is also assumed that selection bias is controlled for in the CLA; yet this demands randomisation of the sample (Braun, 2005). This is not the case, as institutions select participating students. This has implications for the statistical models underlying value-added assessment, which are traditionally used in settings where randomised experiments are the norm (Braun, 2005). Another problem relates to pretest/posttest differences, which have been shown to be negatively correlated with entering scores; i.e. students with low entry scores generally gain more than students with high entry scores (Banta & Pike, 2007, citing Thorndike, 1966). The reliability of difference scores is also very low.
As compelling as the concept of measuring student growth and development in college, or value added, may be, research does not support the use of standardised tests for this purpose. (Banta & Pike, 2007, p.14)
The use of tests to measure general skills rather than discipline knowledge and skills has led to further criticism. Baird (1988, as cited in Banta & Pike, 2007) observed that “the more tests assess general characteristics, the less sensitive they are to change due to educational programs”. He suggests that measuring discipline-specific knowledge could address this problem. Testing content or procedural knowledge and understanding in particular academic disciplines is more accurate and more likely to inform development and growth (Banta & Pike, 2006; Dwyer, Millet & Payne, 2006; Klein et al., 2007). CLA administrators recognise these problems and stress that the test scores are only one source of information among many others, and agree that they should not be the primary and sole basis on which decision-making or interpretations of learning are based


However, the existence of such scores means that they are used in national assessments and reported as a ‘reliable’, ‘independent’ and ‘direct’ measure of the quality of student learning. This can privilege generic test measures over the institutions’ assigned grades, which are the most direct measures of student learning. These generic test scores can also become a source of data in ranking tables, where they may not be analysed in detail and their meaning may be interpreted without caution.

While finding solutions to account for the ways in which institutions contribute to the development and learning of their students will continue to interest researchers and governments alike, there is still no clear solution. It cannot be assumed that an institution or a program is the sole cause of improved learning, as so many other factors are involved (Klein et al., 2007). Drawing conclusions from individual test scores about the value of the whole institution does not seem to be feasible, nor does it serve the purpose of improving student learning (Kuh, 2006). While debate continues to surround the feasibility of testing the learning of university students, transnational testing of school students has been carried out across 32 countries through the Organisation for Economic Co-operation and Development (OECD).
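To make the measurement issue concrete, the following is a minimal sketch in Python, using entirely hypothetical scores, of the regression-residual logic that typically sits behind a ‘value added’ figure; it is not the CLA’s actual statistical model. An institution’s observed exit scores are compared with the scores predicted from its students’ entry scores.

```python
import numpy as np

# Hypothetical entry (SAT-style) and exit (CLA-style) scores for students
# sampled from two institutions, A and B.
rng = np.random.default_rng(0)
entry_a = rng.normal(1100, 80, 200)
exit_a = 0.9 * entry_a + rng.normal(120, 60, 200)
entry_b = rng.normal(1250, 80, 200)
exit_b = 0.9 * entry_b + rng.normal(90, 60, 200)

entry = np.concatenate([entry_a, entry_b])
exit_scores = np.concatenate([exit_a, exit_b])
institution = np.array(["A"] * 200 + ["B"] * 200)

# One pooled regression of exit scores on entry scores.
slope, intercept = np.polyfit(entry, exit_scores, 1)
expected = intercept + slope * entry

# The 'value added' figure is reported as each institution's mean residual
# (observed exit score minus the score expected from entry scores alone).
for inst in ("A", "B"):
    mask = institution == inst
    value_added = float(np.mean(exit_scores[mask] - expected[mask]))
    print(f"Institution {inst}: mean residual {value_added:+.1f}")
```

The sketch makes the contested assumptions visible: the sampled students must represent the institution, a single pooled regression must suit every institution, and the residual must be attributable to the institution rather than to selection effects or unmeasured student characteristics.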

Testing the learning of 15 year old students - the PISA Study

The Programme for International Student Assessment (PISA) is a transnational collaborative effort to measure and assess 15 year old students’ competencies in mathematics, science, reading and problem solving. Carried out by the OECD, the study attempts to compare the performances of students approaching the end of compulsory schooling in 32 countries (including Australia). PISA commenced in 2000 and is conducted on a three-year cycle, with each cycle concentrating on one aspect of competence: science, mathematics, reading and problem-solving (Goldstein, 2004; OECD, 2003; OECD, 2006a). The assessment takes place under test conditions in schools and involves written tasks (Kirsch, Jong, Lafontaine et al., 2002).

The underlying rationale is to find out how well students at the age of 15 are prepared for the challenges of life after school and the demands of living in a knowledge society. PISA’s approach is to assess knowledge, skills and attitudes that go beyond the school-based approach, looking at the use of knowledge in everyday life. The emphasis is on identifying the ability to apply knowledge gained at school to non-school environments (OECD, 2006a). PISA is built upon a comprehensive framework, which draws heavily on theory, domain organisation and methods of assessment (Kirsch et al., 2002). The tasks themselves reflect ‘key competencies’, such as ‘acting autonomously’, ‘using tools interactively’ and ‘functioning in heterogeneous groups’, which are considered to be necessary prerequisites for a successful life and a well-functioning society (OECD, 2005).

PISA claims to “monitor the development of national education systems by looking closely at outcomes over time” (Kirsch et al., 2002, p. 13), and to provide information for parents, students, the public and managers of educational systems on the quality and standard of skills and knowledge students acquire (Kirsch et al., 2002).


The use of subscales is intended to account for differences in students’ backgrounds, learning approaches, school climate, resources and policies. PISA provides a measure of the chances of student success by measuring performance against socio-economic background (OECD, 2003).

The underlying assumption is that educational systems can be compared, although this has been questioned by a number of researchers (Bonnet, 2002; Goldstein, 2004). For example, it is claimed that the lack of longitudinal data and the use of a single set of test instruments, which are translated into all languages, results in problems as this does not adequately account for social and cultural differences:

It needs to be recognized that the reality of comparing countries is a complex multidimensional issue, well beyond the somewhat ineffectual attempt by PISA to produce subscales. With such recognition, however, it becomes difficult to promote simple country rankings which appear to be what are demanded by policy-makers. (Goldstein, 2004, p.328)

Concerned about the potential for cultural bias, Goldstein (2004, p.329) has suggested five requirements that international surveys such as PISA need to meet:

• Recognise cultural specificity within the test questions and subsequent analysis.

• Statistical models used in the analysis must be realistically complex to retain country differences rather than eliminate them.

• Account for the multilevel nature of such comparisons by comparing countries on the basis of their variability.

• Comparative studies should move towards becoming longitudinal.

• Studies like this should not primarily be viewed as a vehicle for ranking but rather as a way of exploring differences in terms of cultures, curricula and school organisation.

Further caution is urged in the use of test information, with Bonnet (2002) arguing that results should not be seen by policy makers as a competition between institutions with winners and losers. Nor should they result in complacency, or in decisions made in haste that are based on (mostly) small standard deviations in league tables. Instead the information should be seen as providing opportunities for understanding differences and identifying areas where specific policies for improvement might be targeted (Goldstein, 2004).

Implications of the PISA study for higher education

PISA is considered to be the most developed global tool to assess students’ skills and knowledge in the secondary school sector. Several questions draw out the implications of international comparison or ranking of higher education institutions based on their educational quality.

Is it possible to compare culturally different educational systems? Just as this question has been raised about the PISA study, universities in different countries are characterised by their unique cultures, with universities in some countries part of a long tradition of education, while in others they are a more recent initiative.


Just as there are significant differences across countries, there are significant differences within each country, as each institution has its unique history, values, mission, funding base, and student and staff body, which vary in much greater ways than is found in schools. Universities do not have common curricula, disciplines, national assessment systems, defined bodies of students or accredited teaching standards, so the issue of accounting for variation becomes even more complex and multilayered in a university context.

Can educational quality be measured using macro indicators? There has been growing criticism of the PISA study for comparing student learning outcomes using a single set of macro indicators; expressing results in the form of quantitative league tables; and suggesting that results represent educational quality. The assumption that the chosen indicators are the same for every country neglects the variation of educational systems and institutions (Bonnet, 2002). Goldstein (2004) suggests that emphasis should be placed on potentially fruitful differences rather than on levelling educational systems. This implies that if international comparisons of systems and institutions in higher education do take place, they should be conducted with the objective of learning from the plurality of approaches to learning and teaching.

Can data about student learning outcomes be used to validly infer anything about the quality of education? Is there a valid uni-dimensional relationship between educational quality and student learning outcomes? These questions draw a variety of responses from policy makers, practitioners and scholars. It seems self-evident that high-quality education, i.e. education that produces graduates who demonstrate high levels of achievement, is evidence of quality learning. But how much of this learning success can and should be attributed to the institution, the system and the individual? Since learning takes place over a whole lifespan and is not exclusively attributable to formal education, there is a need for careful consideration of the measures used to assess educational quality. Institutions may provide high quality education through resources and teachers, yet learning outcomes also depend on the students. For example, differences in student learning outcomes may appear when institutions with differing numbers of students from disadvantaged backgrounds are compared.

These questions highlight difficulties and issues that would be raised by any move to introduce common measures across institutions or national boundaries. This does not imply that we cannot relate learning outcomes to educational quality. What is important is to include educational diversity within these measures. Simply ranking institutions by a number of indicators, without controlling for specific circumstances, leads to unfair comparison, and can result in lower funding and impact on an institution’s ability to attract high quality staff and students. Assessment of education should therefore be multi-dimensional, and not limited to the results of tests (Filinov & Ruchkina, 2002). Comparing higher education institutions of similar type, with similar programmes, systems, missions and funding would yield potentially fruitful comparisons, when the above considerations are taken into account. The main goal should be the improvement of learning, teaching and education through learning from differences, rather than the creation of artificial competition through standardised measures and outcomes.


Other examples of assessment of school students’ learning

There are other national and international comparative surveys of achievement in use at the school level. The major international studies are the Trends in International Mathematics and Science Study (TIMSS), which compares data on the mathematics and science achievement of students in the United States with data from other countries on a four-year cycle, and the Progress in International Reading Literacy Study (PIRLS), which compares the reading literacy of year four students in forty countries (2006) on a five-year cycle. Both are coordinated by the International Association for the Evaluation of Educational Achievement (IEA). Other state-wide examples are the Adequate Yearly Progress (AYP) measures in California and the Texas Assessment of Academic Skills (TAAS).

There is a general attempt to account for differences or even make differences part of the study, such as in TIMSS, where one of the objectives is to “compare science teaching practices between countries and identify similarities and differences in lesson features across countries” (NCES, 2006). Nevertheless, all of these studies face the problem of extracting valid, comparable meaning from scores of tests that have been administered across diverse cultures, educational systems and different schools (Goldstein, 2004).

Criticism of standardised testing is voiced in terms of the objectivity and equity of tests. Strong claims are made, drawing on a significant body of evidence, that comparative testing has actually increased the achievement gap between equity groups. Hursh (2005) claims that this is a result of directing teachers’ and administrators’ attention to meeting national standards, rather than improving the system or institution. There is growing criticism of the impact of standardisation on schools, teachers and students in the United States, where there has been significant reform in the schools sector on access, curriculum and testing. A significant body of evidence is emerging that the reforms have not achieved their goals; have contributed to a decline in overall student learning outcomes; have resulted in new inequalities and exacerbated historic inequalities; and are accompanied by evidence of demoralised teachers and disenfranchised local communities (Hursh, 2005; Lipman, 2004; McNeil, 2005; Ravitch, 1995; Valenzuela, 2004).

The evolution of the accountability and standards movement in primary and secondary education, which is characterised by the multiplicity of international tests developed in recent years, seems likely to spill over to the higher education system. The current calls from governments and accrediting agencies for indicators and measures of learning are a clear signal of interest in pursuing this direction. However, there are well founded concerns that this will lead to negative outcomes for students and institutions, particularly where there is a conflict between the uses of the measures of student learning. Where the information is used to improve the students’ experience and quality of learning, it has had a positive impact. Where it is used to satisfy national reporting goals and increase rankings, it has had a negative impact. National and international indicators and measures can be valuable if the main goal is improvement of teaching and learning at the institutional level, rather than meeting external accountability obligations and maximising scores.


Ranking in higher education

Ranking universities in terms of their educational quality is not new, but it has gained increased interest among institutions, the public, governments and other external stakeholders. Rankings are most commonly directed at serving the interests of consumers for easily interpretable information on the standing of higher education institutions. Shrinking resource allocation, competition among universities, higher tuition fees and an intensified call for educational quality and efficiency are among the claims most commonly made in support of university rankings.

There is a considerable variety of ranking systems, which invariably attract controversy and debate about methodology, objectivity, impact and validity. These issues are currently being vigorously debated in the Chronicle of Higher Education following the release of the 2007 US News & World Report rankings (eg Chronicle, v53/i38, 2007). The US News & World Report is an example of a highly influential national media league table, which published its first rankings of US colleges and universities in 1983. Among the most popular and best-known international rankings of universities are media rankings, such as the ‘Times Higher Education Supplement’ (THES), and rankings conducted by academic groups, such as the Shanghai Jiao Tong University (SJT) Ranking of World Universities, the ‘Champions League’ published by the Swiss Centre for Science and Technology Studies, and the ‘Melbourne Institute Index’ (International Standing of Australian Universities). The following table provides a brief overview of these rankings, their rationale (main indicators) and the weightings given to each indicator.

Times Higher Education Supplement
• International reputation (50%)
• Research impact (20%)
• Teaching quality (student-faculty ratio) (10%)

Shanghai Jiao Tong
• Research performance (80%)
• Quality of education (alumni Nobel Prize and Fields Medal winners) (10%)
• Size of institution (10%)

Champions League
• Research publication performance (size, influence, concentration, impact) (100%)

Melbourne Institute Index
• Research (40%)
• Quality of (under-)graduate programs, student intake and resources (41%)

International rankings and individual weightings (see Liu & Cheng, 2005; Williams & Van Dyke, 2004; Moodie, 2005; Swiss Confederation, 2006)
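To illustrate how such weightings collapse an institution’s profile into a single rank, here is a minimal sketch in Python using hypothetical institutions, indicator scores and weights loosely modelled on the THES-style scheme above; actual league tables differ in how they normalise, source and combine each indicator.

```python
# Hypothetical indicator scores (0-100) for three institutions.
institutions = {
    "University A": {"reputation": 90, "research_impact": 60, "teaching_ratio": 40},
    "University B": {"reputation": 55, "research_impact": 85, "teaching_ratio": 80},
    "University C": {"reputation": 70, "research_impact": 70, "teaching_ratio": 70},
}

# Weights loosely following the THES-style scheme in the table above.
weights = {"reputation": 0.5, "research_impact": 0.2, "teaching_ratio": 0.1}

def composite(scores):
    """Weighted sum of the indicators that carry a weight."""
    return sum(weights[k] * scores[k] for k in weights)

# Sort institutions by composite score to produce the league table.
ranked = sorted(institutions, key=lambda name: composite(institutions[name]),
                reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(composite(institutions[name]), 1))
```

A small change to the weights can reorder the table even though the underlying institutional profiles are unchanged, which is one reason the methodological debates discussed below matter.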

It is often argued that rankings (or ‘league tables’ / ‘score cards’) are useful as they provide more transparency for the public and other stakeholders, and support universities in terms of benchmarking and strategic planning. Yet there is great concern as to whether these rankings validly measure academic and educational quality.


In the current state of affairs in measuring the quality of university research and education, we are limited largely to subjective judgements about quality, buttressed with quantitative data that relate to but do not directly measure quality. (Vaughn, 2002, p.440)

A simple examination of the main indicators shows that there is token use of simple measures that have a questionable relationship to the quality of teaching and learning. The major focus of ranking tables is strongly biased towards research performance and reputation. The weightings given to each indicator over-emphasise input and output indicators, while process indicators are given little attention. Process indicators represent the institutional practices which actually produce and foster desirable outcomes and are action-oriented, which means they can directly inform policy decisions in terms of improvement (Kuh, Pace & Vesper, 1997). While process indicators might be more effective indicators, they are hard to measure and standardise and so are frequently ignored.

The Times Higher Education Supplement claims to measure teaching quality by using the student-faculty ratio as an indicator. Hattendorf (1996) argues that indicators such as the student-faculty ratio may be easy to collect, but they tell us nothing about the quality of teaching and learning. The Melbourne Index gives more weighting to teaching quality measures, but maintains an emphasis on input and output measures.

The reliance on reputational and research indicators has a number of shortcomings for teaching and learning, with empirical research showing that reputational dimensions and institutional resources have only marginal effects on excellence in teaching and learning. It is the “within-college” experiences (quality of teaching, interaction with faculty and peers, level of student engagement, and the intensity of academic experiences) that determine quality in higher education (Pascarella, 2001). The correlation between research performance and undergraduate teaching is very small at best; at worst, departments with a strong research orientation show negative correlations with teaching quality (Astin, 1996). Output indicators such as the number of alumni with Nobel Prizes or Fields Medals are highly questionable in terms of their value in measuring educational excellence. It is simply not known how much of this success can be attributed to the particular institutional experience; comparing institutions on the basis of such indicators is misleading (Pascarella, 2001).

The 2007 US News and World Report calculates its rankings using scores from peer assessment, retention, faculty resources, student selectivity, financial resources, graduation rate performance and alumni giving rate. This league table is notable for the way it calculates its rankings: it frequently changes its methodology, and decisions on the selection of indicators are based not on empirical research but on the editor’s opinion (Chronicle, v53/i38, 2007). Nevertheless, despite serious concerns about the measures and calculations used, the US News remains the most influential league table in the country. A consequence has been that a number of universities and colleges have deliberately changed their institutional practices to specifically target the US News indicators in order to improve their scores in the rankings.


More concerning is the finding that institutions that have sought to address the quality of their learning and teaching more deeply, and have made demonstrable improvements, have not improved in the rankings.

Rankings and their impact on quality of teaching and learning

Using league tables to present a university’s ‘rank’ seems to be an efficient and easy-to-use way of informing stakeholders about educational quality, yet there are serious doubts that quality can be expressed in terms of single, numerical and comparative indicators. More importantly, questions arise about how the audience that rankings claim to inform make sense of them when they rarely have knowledge about how the rankings were developed and how particular scores are derived (IHEP, 2007; Usher & Savino, 2007). It has often been stressed that individual rankings do not represent meaningful differences in educational quality; the difference between the 10th and 20th rank, for instance, is artificial and almost non-existent in reality (Guarino, Ridgeway, Chun & Buddin, 2005; Stella & Woodhouse, 2006; Vaughn, 2002). This may lead prospective students to look at an institution’s reputation, when it would be more informative to ask whether a university provides quality in terms of the teaching and learning environment that will support their learning aspirations (Sanoff, 2007).

Rankings tend to discourage diversity. This is an important consideration, as diversity has been shown to have a powerful effect on student engagement and learning. There are two ways rankings can discourage diversity.

1. Compiling league tables based on generic criteria supports the levelling of otherwise fairly different institutions (in terms of mission, goals and student body). Stella & Woodhouse (2006) argue that this is “contrary to the principle of quality assurance”, as institutions should be evaluated without neglecting their individual objectives. It would be more appropriate to recognise how well institutions fulfil their unique mission (Tight, 2000). “League tables impose a one-size-fits-all approach” (Usher & Savino, 2007, p.33), which does not account for differences between institutions. Codlin & Meek (2006) add that once a ranking system is in place, the institutions that rank at the top, usually the ‘sandstone’ universities, which are older and perceived to be more prestigious, set the direction for universities at lower ranks; in order to raise their rank, these institutions commonly try to adjust to, or even copy, the activities of the more ‘successful’ institutions, which leads to institutional convergence rather than diversity.

2. Rankings tend to encourage universities to be more selective in terms of enrolment, or even to restrict student intake from particular backgrounds, in order to raise their position in the ranking table (Clarke, 2007; Tight, 2000). This is disadvantageous in terms of structural diversity, which has been empirically shown to improve quality learning (Hu & Kuh, 2003b; Terenzini, Cabrera, Colbeck, Bjorklund & Parente, 2001; Umbach & Kuh, 2006). Even more concerning in terms of equity and open access to higher education is that low-income and minority students could be disadvantaged amidst the competition for high achieving students (Clarke, 2007; Meredith, 2004).


If universities are to be judged by the standards set by ranking systems and have strong incentives to conform to them, does moving in this direction take us closer to or further from true educational quality? (Guarino et al., 2005, p.149)

The functionality of rankings should be questioned rigorously in terms of whether they encourage improvement in teaching and learning (Dill & Soo, 2005). It is questionable whether high-stakes rankings encourage improvement when the main concern is to improve the rank of one’s institution within a league table (Pascarella, 2001; Tight, 2000; Vaughn, 2002). Since rankings and associated funding usually reward the ‘elite’ group of institutions, which “depend and feed upon ‘lesser’ institutions” (Tight, 2000), it is much harder for institutions at ‘lower’ ranks to enhance their teaching and learning environment (Stella & Woodhouse, 2006).

There is a need to base ranking data on findings from empirical research that have been shown to have positive effects on learning and teaching. Pascarella (2001) suggests, “although not a perfect methodology” (p.22), that we should measure institutional excellence in terms of the effectiveness of educational practices and processes, that is, focus directly on student experiences (Dill & Soo, 2005). If rankings are really “here to stay”, as many authors have stated, there is an urgent need to rethink the methodologies of the rankings currently in use. The International Rankings Expert Group (IREG), at its meeting on rankings in Berlin (May, 2006), developed a catalogue of principles for good ranking practice. The ‘Berlin Principles on Ranking of Higher Education Institutions’ contribute to an emphasis on quality in learning and assessment by acknowledging diversity among institutions, cultures and educational systems. They place strong emphasis on methodology, weighting and the design of appropriate indicators, and take into account consumers’ needs as well as facilitating their understanding of ranking results.

Summary of global trends and issues

A pervasive trend across all of the countries reviewed is the establishment of national systems of accreditation, quality processes and audit, and requirements to provide information on performance indicators. Performance indicators at the national/regional level fall into five broad categories:

1. Common institutional indicators that are required by quality audit and accreditation processes
2. Centralised collection of mandated data that may be subsequently reported in national/regional reports
3. Survey data from students on their satisfaction, engagement, learning experiences and employment
4. Tests of learning: readiness, generic, professional/graduate admissions
5. Ranking and league tables that select data from the centrally collected and publicly available information.


Trends evident in higher education include:

• Higher education is now more than ever seen as an economic commodity, with increased interest in linking employment outcomes to higher education (employment and graduate destinations). This in turn has led to interest from governments and funding agencies in measuring the employability of students through measures of learning and their employment outcomes.

• There has been a global trend to develop and use performance indicators at the national/sector level, as evidenced by the PISA study, the Measuring Up reports and international rankings.

• There is growing interest in identifying ‘direct measures’, particularly of student learning.

• There is increasing interest in performance funding based on measures and indicators.

• There is a renewed interest in benchmarking at the national and regional level (e.g., the European Higher Education Area).

• There is greater emphasis on quality auditing and accreditation within countries and regional groupings (e.g., the Bologna Process and European Higher Education Area, the US).

• In European countries there are steady moves to assign greater autonomy and independence to higher education institutions, with less direct involvement from governments, through quality auditing and accreditation mechanisms. By way of contrast, there are calls for greater government oversight of higher education institutions in the US through the use of standardised indicators and measures.

• There are concerns expressed by researchers and higher education institutions about the impact of national/sector performance indicators on the autonomy and diversity of institutions.

While there are clear trends emerging of greater oversight and desire for standardised measures of learning and effectiveness at the national level, this trend should be interpreted cautiously. The more promising measures and indicators are those that are situated in institutional practice.


SECTION 3: INDICATORS OF QUALITY TEACHING AND LEARNING

The focus of this section is quality indicators of teaching and learning and issues related to their use at the institutional and national level. It concludes with an overview of three dimensions which have a demonstrable impact on the quality of student learning: student community and engagement; assessment; and institutional climate and diversity. These dimensions can be evidenced by a number of indicators and measures and are briefly described to illustrate how they can be used at the institutional level to inform and enhance the quality of teaching and learning. Some of these measures and indicators may be usefully considered at the national level once they have been embedded at the institutional level.

The idea of performance indicators originates in economic models of the education system as a process within a wider economic system which converts inputs, such as student enrolment, into outputs, such as graduation rates (Ramsden, 1991). Performance indicators give education statistics context, permitting comparisons between fields, over time and with commonly accepted standards. The purpose of indicator use in the higher education sector is to facilitate the evaluation and review of institutional operations by providing evidence of the degree to which institutional teaching and learning quality objectives are being met (Bruwer, 1998; Romainville, 1999; Rowe & Lievesley, 2002; DEST, 2003).

Although indicators depict trends and uncover interesting questions about the state of higher education, they do not objectively provide explanations which reflect the complexity of higher education or permit conclusions to be drawn. Multiple sources of information and indicators are required to diagnose differences and suggest solutions to improve the quality of higher education (Canadian Education Statistics Council, 2006; Munoz & Egginton, 1999; Rojo, Seco, Martinez & Malo, 2001). Without multiple sources of both quantitative and qualitative information, interpretation may be erroneous. For example, a high graduation rate may be attributed to better organised teaching with effective student supervision, or to poor assessment procedures (Tavenas, 2003). It is imperative that indicators are only interpreted in light of contextual information concerning the institution’s operation (EUA publication).

The term performance indicators is often used interchangeably with the terms quality indicators and performance measures. Where they have similar characteristics, they will be described as performance indicators in this section of the report.

Evaluation, analysis, interpretation and utilisation of performance indicator data for quality enhancement

The use of literally hundreds of performance indicators across the higher education sector has resulted in the collection of unnecessarily large volumes of data which are largely irrelevant in informing the enhancement of teaching and learning.


Some of this information will be used for other purposes, such as auditing and other quality processes. However, it is clear that with such a large amount of data collected for various purposes, much of it is often difficult to handle, organise and interpret. It is widespread practice for institutions to collect and measure large amounts of input and output data as part of an automatic, annual cycle of data collection, but to carry out little analysis or interpretation of that data. Consequently, this habitual annual practice largely results in reduced motivation for careful, consistent and complete gathering by reporters and collectors. This potentially leads to unstandardised collection processes and virtually useless data.

The underlying message is that thoughtful decisions are needed on what data should be collected for interpretation, and that processes and systems should then be established so that the data are collected with probity. It is also vital that the collected data are not over-interpreted, resulting in inaccurate reflections (Rowe & Lievesley, 2002). Professional development and training are critical, as are interpretation consultations and mentoring programs, in aiding the accurate interpretation of data in terms of relevance, reliability, validity and applicability.

Performance indicators in common use

The performance indicators most commonly used in higher education institutions are those which are most readily quantifiable and available, not those which most accurately reflect the quality of education provided (Bormans, Brouwer, Int’Veld & Mertens, 1987; Bruwer, 1998; Romainville, 1999). These performance indicators generally have limited empirical support from the literature. Qualitative outcome and process measures are more informative and empirically sound, but are difficult to measure and so are utilised less. This is unfortunate, because it is what happens within the institution and how students engage in and experience their studies that is more important in determining the quality of learning and teaching than input measures (Pascarella, 2001).

Consistent with the reported findings, frequent use of quantitative indicators (particularly input measures) corresponds with a system which is overly removed from the objectives of higher education. For example, a common indicator at the national level is the retention rate. This is important from a national and institutional perspective as it indicates efficiency and social and economic benefit. But from a student’s perspective, the primary objective of attending university is not to ‘avoid dropping out’ or to ‘pass courses’, but to gain knowledge, skills and experiences in a supportive social and academic environment that provides equal opportunities (Romainville, 1999).

Table 3.1 outlines illustrative indicator types currently in widespread use at the national, institutional, department and individual levels across a number of countries. It should be noted that these illustrative indicators are not necessarily good or recommended indicators; they are simply representative of the indicators in common use in the countries reviewed. They will be described in more detail in a subsequent report from the Carrick Institute project.


Table 3.1: Illustrative performance indicators in widespread use

National
Input: Resource provision; Infrastructure; Curriculum committees; Staff qualifications/experience; Student/staff ratio; Enrolment rates by type of student; Clear goals and standards
Output: Graduate employment data; Student progress rate; Retention rate; Graduation rate; Research higher degree productivity rate
Outcome: Graduate employment status; Evaluation of teaching performance; Student feedback; Student acquisition of generic skills; Student engagement
Process: Appropriate balance of staff time in teaching, research, administration, consulting and community activities; Active and collaborative learning; Study/work environment

Institution
Input: Enrolment rate; Student/staff ratio; Provision of support services; Teaching experience/qualifications
Output: Graduate employment rate; Retention rate; Graduation rate; Citation/publication rate of research
Outcome: Stakeholder satisfaction/engagement; Value of graduates; Quality of research
Process: Mission statement; Academic innovation and creativity; Visionary leadership; Accommodation for student/staff diversity; Link research to teaching; Learning community; Institutional climate

Department/Program
Input: Enrolment rate; Student/staff ratio; Teaching experience/qualifications; Explicit learning outcomes
Output: Retention rate; Citation/publication rate of research
Outcome: Stakeholder satisfaction/engagement; Value of graduates; Quality of research
Process: Accommodation for student diversity; Student centred approach; Use of current research in informing teaching and curriculum content; Specific, continuous and timely feedback; Community engagement/partnership

Individual Teacher
Input: Teaching experience/qualifications; Explicit learning outcomes
Output: Graduate employment rate; Student progress rate; Graduation rate
Outcome: Student learning outcomes
Process: Accommodation for student diversity; Student centred approach; Communication skills; Possession of desirable teacher characteristics; Specific, continuous and timely feedback; Use of current research in informing teaching and curriculum content; Community engagement/partnership

Student
Input: Staff teaching qualifications; Resource provision; Class size; Student background characteristics; Clear student learning outcome statements
Outcome: Student learning outcomes; Student satisfaction; Graduate skills; Student engagement; Student community; Motivation for life-long learning
Process: Social involvement; Facilitation and valuing of diversity; Diversity interactions; Learner-centred environment; Peer collaboration; Student engagement


Relevant and practical quality performance indicators

An essential element in enhancing the quality of teaching and learning within the higher education sector as a whole is enhancing the indicators on which judgements of quality are made. It is essential that teaching and learning indicators are utilised in ways which ensure that the availability of data does not dictate the approach taken. If governments and their agents desire a more data-driven policy approach, it is crucial that valid indicators of quality teaching and learning are developed and used in order to produce practical and significant data that can inform institutional decisions (Coates, 2006b; Hattie, 2005). Unless performance indicators are used in a way that can better inform educational governance and practices that generate enhancements in the quality of teaching and learning, particularly the enhancement of student learning outcomes, the measurement process and evaluations become little more than an expensive data gathering exercise that is difficult to sustain and justify (Rowe & Lievesley, 2002).

The measurement of quality teaching and learning within the higher education sector should be based on indicators which are significant in informing individual and institutional performance and, where feasible, also significant on a common national or sector-wide scale. A performance indicator considered useful is one that informs the development of strategic decision-making, resulting in measurable improvements to desired educational outcomes following implementation (Rowe & Lievesley, 2002). The quality of any given performance indicator derives from a number of variants, including:

1. Validity
The reliability of a performance indicator does not guarantee its validity, in terms of both content validity (including face validity and logical validity) and criterion-related validity. For example, while it is possible to have a highly reliable performance indicator that lacks validity (e.g. an assessment task), a valid performance indicator that has low reliability is of little or no use.

2. Reliability
Reporting of reliability (accuracy of measurement over time), as well as of sources of measurement error, in the formation and interpretation of performance indicator information is frequently overlooked by developers, gatherers and suppliers, regardless of their inherent responsibility to report such limitations. At the very least, communication of the administrators’ uncertainty associated with observed scores is required to minimise the potential ‘risks’ of misinterpretation.

3. Relevance to mission and policy
Judgements related to the relevance of a given performance indicator depend on the purpose for which it is gathered and how it is used to inform policy, planning, practice and reform. In addition, the relevance of any performance indicator is location-specific (countries are at various stages of development) and context-dependent in terms of policy priorities and demands for information. Performance indicator data should not be collected for their own sake but for specific policy purposes.


4. Potential for disaggregation
Data are more useful if they can be disaggregated so that the information can be considered along various dimensions and at various levels. Common dimensions of disaggregation include gender, socio-economic status, ethnic and equity groupings, type and name of education program, year level, ENTER scores and grades (see the sketch at the end of this subsection).

5. Timeliness (i.e. currency and punctuality)
An important characteristic of the usefulness of performance indicators is their availability at times when key policy and planning decisions need to be made. Lack of appropriate data from specific performance indicators is likely to lead to misinformed decisions based on opinion rather than evidence. Where the relevant information for some performance indicators takes longer to collect, analyse and process (e.g. student achievement/progress rates), significant findings at key stages of the data collection should be reported in order to inform policy makers and planners of potential trends, as well as of current performance indicator factors affecting those trends.

6. Coherence across different sources
The validity and reliability of performance indicators rely on the degree to which data are collected from a variety of appropriate sources. Evidence of performance from a number of sources (e.g. student surveys, performance reviews, stakeholder opinions, enrolment and licensure data) is more likely to be reliable, valid and representative of the performance area.

7. Clarity and transparency with respect to known limitations
Performance indicator data should be collected, analysed and interpreted in light of their methodological limitations. When reporting the results of these analyses, limitations should be clearly outlined to the reader. Decisions made by the institution to account for such limitations should be made explicit.

8. Accessibility and affordability (i.e. cost effectiveness)
Decisions concerning the costs involved in measuring performance indicators must be balanced against considerations of their utility in informing policy, planning and reform. For example, when measuring performance indicators of student achievement outcomes, the cost and feasibility of obtaining estimates from full-cohort or population data collections may be unjustifiable compared with those obtained from appropriately designed representative samples.

9. Comparability through adherence to internationally agreed standards
The adoption of internationally agreed quality benchmarks for the delivery of higher education programs will allow comparability of peer institutions, potentially increasing the quality of educational delivery worldwide. However, without adherence to these internationally agreed standards, the aim of such an exercise is defeated.

10. Consistency over time and location
The measures used to collect performance indicator data should be consistent over time (i.e. not switching to a measure that gives a more favourable impression) as well as consistent across departments and subjects.


By utilising the same measures, comparability across time and across peer departments becomes less problematic, allowing trends in performance quality to be more visible.

11. Efficiency in the use of resources
Resources should be used in a productive and efficient manner. The allocation of financial aid, student/staff workload, social and psychological support services, as well as general infrastructure and space, should be done in a manner that best serves the interests of students and staff in the creation of an environment which promotes academic interest, creativity and support for all participants.

The optimal combination of these variants depends on the intended use of the data, as data may be acceptable for one purpose but inadequate for another (Rowe & Lievesley, 2002). As the majority of data resulting from performance indicator measurement is diverse in its intended purpose, the process of determining “fitness for purpose” is extremely important and requires extensive consultation. Accordingly, the literature suggests that the use of performance indicators depends on at least three necessary conditions (Cabrera, Colbeck & Terenzini, 2001):

1. Data are meaningful when defined by the user (i.e. the data should inform the user in a way that can improve decisions).
2. Performance indicators are most reliable and valid when used as a group (i.e. the information should provide a comprehensive picture of a strategic area).
3. Data should provide information concerning the inputs and processes associated with a particular outcome or function (e.g. enrolment management, learning, teaching, outreach and community services).

Taking account of the conditions under which performance indicators are best employed, it is suggested that a set of practical and sound indicators be used as a component of a guiding framework that can be applied by institutions as part of a sector-wide approach. Such a framework would contribute to institutions being able to undertake targeted benchmarking at a sector-wide and potentially global level.
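As an illustration of the disaggregation attribute above, the following minimal sketch in Python uses hypothetical unit-record data and an illustrative progress-rate definition (passed load divided by attempted load) to show how an aggregate indicator can conceal the group differences that disaggregation reveals.

```python
import pandas as pd

# Hypothetical unit-record data for a student progress-rate indicator.
records = pd.DataFrame({
    "equity_group":    ["low SES", "low SES", "other", "other", "low SES", "other"],
    "mode":            ["internal", "external", "internal", "internal",
                        "external", "external"],
    "passed_eftsl":    [0.750, 0.375, 1.000, 0.875, 0.500, 0.625],
    "attempted_eftsl": [1.000, 1.000, 1.000, 1.000, 1.000, 1.000],
})

# Aggregate progress rate for the whole cohort.
overall = records["passed_eftsl"].sum() / records["attempted_eftsl"].sum()
print(f"overall progress rate: {overall:.2f}")

# Disaggregated by equity group and mode of attendance.
by_group = records.groupby(["equity_group", "mode"])[
    ["passed_eftsl", "attempted_eftsl"]].sum()
by_group["progress_rate"] = by_group["passed_eftsl"] / by_group["attempted_eftsl"]
print(by_group["progress_rate"].round(2))
```

The same pattern applies to any of the indicators in Table 3.1: the aggregate figure satisfies a reporting requirement, while the disaggregated view is what supports an institutional decision.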

Institutional concerns about national level performance indicators

While the development of a common set of indicators for use in higher education institutions creates an opportunity for establishing standards and international benchmarking comparisons, such indicators can have detrimental effects on those institutions performing below the average or agreed standards (Rowe & Lievesley, 2002; U.S. Department of Education, 1998). Institutional performance differences are to be expected, owing to varied missions and developmental stages which are largely outside of an institution’s control (including the cultural and environmental origins of the institution, its purpose, its funding, and the history of the institution and of the country within which it resides). Institutions which might be identified as ‘lower quality’ higher education institutions are more likely to be recently established, to attract students of lower socio-economic status and with lower admission scores, and to have limited resources (Spellings, 2006; U.S. Department of Education, 1998; Yorke et al., 2005).


As a result, such institutions have a limited ability to attract high quality teachers and researchers, which perpetuates continued poor performance on quality measures (Rowe & Lievesley, 2005; U.S. Department of Education, 1998). If the evaluation of such institutions also results in reduced funding, this will perpetuate the lower standards of the institution by denying it the means and resources required for improvement (Meier & O’Toole, 2002; U.S. Department of Education, 1998). The result is a number of low functioning institutions which cannot be fairly compared to historically long established universities with high reputations and levels of resources. These established institutions typically perform well in quality assurance evaluations and, where this is attached to funding, continue to bolster their reputation, resources and the quality of the education they provide to their students (U.S. Department of Education, 1998).

Rather than focusing on collecting information primarily at the national level, it would be more effective if information were gathered at the institutional level and focused on the progress made, with funding and rewards based on demonstrated progress. The use of an agreed framework of indicators at the institutional level would account for the inherent institutional differences that exist. Measuring an institution against its previous performance provides the institution with a sense of direction and progress on multiple dimensions and levels of quality. Such a framework would provide a useful basis for institutions to compare their performance against previous performance, similar institutions and established standards. If funding were based on the measurement of institutional progress and the achievement of high standards, the outcome would be ongoing enhancement in the quality of teaching and learning, rather than an ongoing decline from a lack of resources.

The general conclusion is that the large majority of work completed on performance indicators in higher education has been undertaken with reference (explicit or implicit) to the expectations of external bodies which have an interest in performance and comparability between universities (Yorke, 1991). Relatively little emphasis has been given to aspects of intra-institutional performance, perhaps because the nearer one gets to the level of the student learning experience, the more difficult it is to employ and measure indicators deemed valid, reliable and objective (Yorke, 1991). This report suggests that it is at this level where indicators can be most usefully employed, and where they are most likely to lead to an enhanced learning environment which benefits students.

Student retention and attrition as a national level indicator of institutional/educational quality: an illustrative example

Student retention and attrition rates are used widely as national level indicators of the quality, effectiveness and efficiency of institutions and higher education systems. These measures are of prime interest to governments and their agents and are considered an important aspect of accountability to the public as to whether the investment made in higher education is paying off in terms of graduating students for the labour market. However, there are growing concerns about the appropriateness of using retention and attrition measures to draw conclusions about the educational quality of an institution (Cooper, 2002; Hayden, forthcoming; Yorke & Longden, 2004). The major areas of concern are measurement and methodology, and the appropriateness of using these rates as an indicator of institutional quality and performance.


Measurement and methodology problems

One of the main concerns about the validity of retention and attrition rates is the way they are measured and what is measured.

Completion rates are a constant source of debate, much of it on shaky grounds since it is hard to pin down with any precision the number of students who actually disappear from higher education altogether. (Yorke & Longden, 2004, p.37)

Calculating these rates as year-to-year or programme persistence does not account for the considerable number of students who have transferred to a different course or even a different institution. An example is provided by Hayden (forthcoming) when considering deferments by students, which are taken for a gap year, for work or for personal reasons. Retention/attrition data rarely take into account incidents of students delaying their studies for a period of time. Recent press reports indicate deferments have increased by as much as 100% in some Australian universities (Higher Education Supplement, 31/5/07). These students may continue studying at a later date, but statistically they are counted as dropouts (Lukic, Broadbent & MacLachlan, 2004) or early leavers (Conway, 2001). This measurement error can also have the opposite effect, where course commencements may be double counted when deferment incidences are not taken into account (Hayden, forthcoming).

Countries define and measure attrition and retention differently and at different intervals, which makes international comparisons across regions problematic. Non-completion can have several meanings, from students who commence study but do not gain a university qualification, to students who depart with a lower qualification than that in which they originally enrolled, through to students who actually withdraw from studying and have therefore departed entirely. The way students are classified by their institution can also differ (Cooper, 2002; Conway, 2001; Yorke & Longden, 2004). Some universities classify students who do not re-enrol as ‘transferred’ to another course or institution. Since it is very costly to track students across the system, it cannot be said with certainty whether these students have transferred or deferred/withdrawn from their studies. Consequently, student retention can be overestimated or underestimated (Cooper, 2002; McInnis, Hartley, Polesel & Teese, 2000). The recent Australian initiative under Backing Australia’s Future which assigns students a unique identifier will enable more accurate tracking of students in higher education and may facilitate more accurate collection of retention and attrition data.
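The measurement problem can be made concrete with a minimal sketch in Python, using hypothetical enrolment records and a naive year-to-year definition (real national collections use more elaborate rules); here, students who defer or transfer are indistinguishable from students who leave higher education altogether.

```python
# Hypothetical commencing cohort, with each student's actual situation the
# following year. A naive year-to-year calculation only sees re-enrolment
# at the same institution.
cohort = [
    {"id": 1, "next_year": "re-enrolled"},
    {"id": 2, "next_year": "re-enrolled"},
    {"id": 3, "next_year": "deferred"},      # intends to return later
    {"id": 4, "next_year": "transferred"},   # continues at another institution
    {"id": 5, "next_year": "withdrew"},      # left higher education entirely
]

retained = sum(1 for s in cohort if s["next_year"] == "re-enrolled")
naive_attrition = 1 - retained / len(cohort)

# Students who actually left the system altogether.
true_attrition = sum(1 for s in cohort if s["next_year"] == "withdrew") / len(cohort)

print(f"naive attrition: {naive_attrition:.0%}")    # 60%
print(f"'true' attrition: {true_attrition:.0%}")    # 20%
```

A unique student identifier of the kind noted above would allow the deferring and transferring students to be separated out before an attrition figure is reported or compared.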

Problems with student retention as a quality indicator

Several researchers have proposed models of student attrition and retention over the past thirty years, for example the organisational/psychological model developed by Bean & Eaton (2000) and Tinto’s (1993) interactional model, which dominates the field (Braxton, Sullivan & Johnson, 1997, as cited in Cooper, 2002; Hayden, forthcoming).


Cooper (2002) has developed a method for relating educational research to quality indicators which stresses that quality indicators are only valid if there is a continuous theoretical path that explains the relationship between an appropriate definition of quality and the indicator/data under review. Some reasons why the use of retention as a quality indicator is problematic include:

1. Retention statistics do not differentiate between student departures that could be attributed to the institution and those that can be attributed to students’ personal circumstances and other external events. Since retention rates are used in measures of institutional quality, it must be questioned whether external causes for withdrawal, which lie beyond an institution’s control, are an appropriate measure of quality (Conway, 2001; Wyman, 1997; Yorke & Longden, 2004).

2. Students’ attendance patterns have changed considerably in recent years. A significant number of students dip in and out of higher education as their life circumstances change. Yorke and Longden (2004) argue that the national agenda and institutional approaches encouraging lifelong learning actually weaken retention as an indicator. What should be alarming to governments and institutions is long-term dropout, but this cannot be identified from annual statistics. This point is illustrated by Conway (2001), who found that 59% of early leavers subsequently attended another post-secondary institution.

3. The early departure of students is unevenly spread throughout the system. A number of institutions enrol higher numbers of students from equity groups, minority cultural and disadvantaged backgrounds, mature age students, or students with lower entrance scores. These students are significantly more likely to withdraw from study (Shah & Burke, 1996; Yorke & Longden, 2004). Where funding is contingent on attrition and retention rates which do not take into account the context and circumstances of the institutions and their student body, it raises the possibility that these institutions would try to maximise their retention rates by enrolling students who are more likely to persist in their studies. The likely long term impact of this would be a less diverse student body, greater competition for the targeted students, and greater differentiation in access to resources.

4. Since more and more students have to fund themselves, with less support from the government, the rationale for using retention and completion as a performance indicator also becomes weaker (Conway, 2001; Yorke & Longden, 2004).

5. The current recognition and articulation systems between post-secondary institutions support a wide variety of transfer behaviour (Conway, 2001), across programs of study, and between institutions and countries.

6. The claim that information on retention is of interest to prospective students is questionable. Yorke and Longden (2004) argue that this information says little about the quality of the student experience and may result in prospective students choosing the wrong institution for reasons relating to its reputation rather than the actual learning experiences it can provide.


7. A neglected factor is efficiency. In times of shrinking resources, attrition appears to represent inefficiency, yet not all student attrition can be equated with inefficiency. Many students transfer or interrupt their studies for good reasons which may be beneficial to their academic future (Conway, 2001; Hayden, forthcoming).

In summary, there are significant issues to be considered if student retention is to be used as an indicator of educational quality. Retention data can and should be used at the institutional level as a warning sign to explore whether lack of resources, low quality teaching, poor advising, poor student support, or other institutional problems might be contributing to attrition rates.
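To make the distinction in points 1 and 2 above concrete, the following sketch shows how an institution might decompose a raw attrition figure into transfers, externally driven withdrawals and unexplained departures before treating retention as a warning sign. It is a minimal illustration only: the record fields (enrolled_next_year, transferred, withdrew_external) are hypothetical and do not correspond to any actual national data collection.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    # Hypothetical fields for illustration only; real data collections differ.
    enrolled_next_year: bool   # re-enrolled at the same institution
    transferred: bool          # moved to another post-secondary institution
    withdrew_external: bool    # left for documented external/personal reasons

def retention_breakdown(cohort: list) -> dict:
    """Decompose a commencing cohort into retained students, transfers,
    externally driven withdrawals and unexplained departures."""
    n = len(cohort)
    retained = sum(s.enrolled_next_year for s in cohort)
    transfers = sum((not s.enrolled_next_year) and s.transferred for s in cohort)
    external = sum((not s.enrolled_next_year) and not s.transferred
                   and s.withdrew_external for s in cohort)
    unexplained = n - retained - transfers - external
    return {
        "retention_rate": retained / n,
        "transfer_rate": transfers / n,
        "external_withdrawal_rate": external / n,
        "unexplained_departure_rate": unexplained / n,
    }

# Example: of 100 commencing students, 80 re-enrol, 12 transfer and 5 leave for
# documented external reasons. Raw attrition is 20%, but only the remaining 3%
# is "unexplained" and a plausible prompt for investigating institutional causes.
```

Only the unexplained component is a reasonable trigger for the kind of institutional self-examination described above; the transfer and external components illustrate why a headline retention rate, on its own, says little about institutional quality.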

Institution level performance indicators supported by evidence

Many of the performance indicators on quality in teaching and learning currently in use in higher education institutions are not supported by empirical or theoretical evidence. Much of the literature is critical of many of the indicators currently in use, particularly input and output indicators. It is likely that these indicators came into common use because the data were readily available, rather than through an analysis of their appropriateness. Indicators that are relevant at all levels, but are particularly valid at the institutional level for measuring the quality and performance of institutions, teachers, staff and students, are grouped under the following dimensions of teaching practice:

1. Institutional climate and systems
2. Diversity and inclusivity
3. Assessment
4. Engagement and learning community

These four dimensions are interrelated. For example, both learning community and institutional climate and systems refer to the notion of an institution-wide commitment to learning: a campus climate that values student learning by creating an institution-wide ethos in which learning is the focus of all academic and administrative work. This is considered a necessary condition for fostering quality in student learning and for students to feel part of a learning community (Del Favero, 2002; Kuh, 1993b; Kuh, 1995; McDaniel, Felder, Gordon, Hrutka & Quinn, 2000; Pascarella & Terenzini, 1991; Shanahan, Findlay, Cowie et al., 1997). Table 3.2 outlines the four dimensions of teaching practice and illustrative learning and teaching indicators.


Table 3.2: Learning and teaching indicators for four dimensions of teaching practice

Institutional climate and systems
• Adoption of a student-centred learning perspective
• Possession of desirable teacher characteristics
• Relevant and appropriate teaching experience, qualifications and development
• Use of current research findings in informing teaching and curriculum/course content
• Community engagement/partnership
• Funding model in support of teaching and learning

Diversity and inclusivity
• Valuing and accommodating student and staff diversity
• Provision of adequate support services
• Active recruitment and admissions
• Provision of transition and academic support
• Active staff recruitment
• Multiple pathways for reward and recognition of staff

Assessment
• Assessment policies address issues of pedagogy
• Adopting an evidence-based approach to assessment policies
• Alignment between institutional policy for best practice and faculty/departmental activities
• Commitment to formative assessment
• Provision of specific, continuous and timely feedback
• Explicit learning outcomes
• Value of graduates
(Further indicators of assessment are provided in Table 3.3)

Engagement and learning community
• Student engagement
• Fostering and facilitating (academic) learning communities
• Engaging and identifying with a learning community
• Staff engagement

Institutional climate and systems

An institutional climate characterised by a commitment to the enhancement, transformation and innovation of learning is more likely to develop when the main responsibility lies with the institution (Peterson & Augustine, 2000). Institutional autonomy and an emphasis on local conditions are more likely to encourage educators to employ innovative practices aimed at improvement and quality learning. They are also likely to lead to more collaboration, cooperation and communication across the institution, which is crucial for disseminating and sharing good practice. Institutional climate and systems form a key dimension of quality teaching and learning, referring to the evaluation of institution, staff and student levels of satisfaction, engagement and experience. The measurement of student experience and satisfaction is currently a common indicator of quality teaching and learning; however, such data contributes only a limited amount of information about the institution. There are other, potentially more significant, indicators of institutional climate.

Student-centred learning approach

Evidence of a student-centred learning approach is perhaps the most strongly supported indicator of teaching and learning quality (Gibbs & Coffey, 2004; Hoyt & Lee, 2002; Kuh, 1993b, 1995; McDaniel, Felder, Gordon, Hrutka & Quinn, 2000; Pascarella & Terenzini, 1991; Smart, Feldman & Ethington, 2006; Tinto, 1997, 1998). The adoption of a student-centred learning approach may be evidenced at the institution, department/program and individual teacher levels. It should be evident in the policies and practices of enrolment, assessment, progression, and the provision of services and support for students, as well as in the provision of appropriate development opportunities, resources and support for teachers. It typically involves evidence of providing student choice that accounts for student diversity, encouraging active student engagement, and encouraging collaborative engagement on educational issues (American Association for Higher Education, 1998; Coates, 2006a; Kuh, Pace & Vesper, 1997; NSSE, 2006; Tam, 2007; Umbach & Wawrzynski, 2005). It is characteristically defined by setting high but achievable expectations (Belcheir, 2001; Hearn, 2006; NSSE, 2000; Schilling & Schilling, 1999; Umbach & Wawrzynski, 2005), encouraging a deep or mastery approach to learning and student experimentation in the learning process, and accounting for student needs rather than adopting a teacher-centred, passive learning approach.

Valuing teaching and teachers

Studies have found that a large proportion of university staff do not believe quality teaching is rewarded by their institutions, and this is accompanied by low levels of satisfaction (Kember et al., 2002; Ramsden & Martin, 1996). If teaching is valued and appropriately rewarded, higher levels of staff satisfaction, with subsequent motivation for teachers to enhance the quality of their teaching, would be expected. Increased satisfaction as a result of institutional recognition of teaching contributions is likely to contribute to enhanced teaching behaviours and more satisfied students, resulting in a positive institutional climate. The quality of staff experience and satisfaction is therefore an important consideration in the quality of teaching and learning. Understanding the components of institutional climate, including the measurement of staff engagement and satisfaction alongside multiple levels of student engagement and satisfaction, institutional effectiveness, organisation and management, has been largely neglected to date. The measurement of staff experience and satisfaction has received extensive support in the literature as a highly useful indicator but has not been widely employed in higher education institutions in the measurement of quality teaching and learning. While student experience and satisfaction are the most commonly used measures, the experience of teaching staff, and their satisfaction as educators in their current roles, is not presently evaluated. Given that the experience and satisfaction of staff is highly influential on teaching behaviours, which in turn affect student learning outcomes, it is questioned why a survey of staff experience and satisfaction is not more widely used in institutions. The results of such a staff survey would highlight areas of satisfaction, concern and frustration, as well as identifying the practices and systems teachers value. This information could then be used to inform the enhancement of those practices and systems.

An important feature of the learning community dimension is the nature of academic staff development, as this can have positive effects on student learning as well as on learning approaches (Braxton, 2006; Gibbs & Coffey, 2004; Hounsell & Entwistle, 2005; Ho, Watkins & Kelly, 2001; MacDonald, 2001). When academic development programs are based on theoretical models of student learning, they are considered likely to enhance learning (Prebble, Hargraves, Leach et al., 2005). Institutions that support excellence in teaching directed at the improvement of student learning, and give weight in their reward structures to teaching behaviours that contribute to learning, are more likely to enhance student learning (Barr, 1995; Braxton, 2006; Hounsell & Entwistle, 2005). Performance review criteria for teaching staff that clearly convey expectations of evidence of quality student learning will encourage staff to focus on students (Braxton, 2006).

A learning university is characterised as being open to, and continuously searching for, structures and methods that enhance learning; this can involve crossing traditional boundaries between departments and even institutions (Barr, 1995; Kuh, Kienzie, Schuh & Whitt, 2005). With regard to the individual student, there should be clear expectations and strategies for achieving high quality learning (Donald, 2000) and clear statements of functional goals for student learning (Association of American Colleges and Universities, 2006). The way a particular university translates and implements a learner- and learning-centred environment is highly contextual. There is no single set of indicators that can predict high quality learning. However, institutional commitment to student learning (Tinto & Pusser, 2006) is visible in strategies, faculty development, curricula, pedagogies and programs related to learning. On this basis, an emphasis on learning may be best measured at the departmental and institutional levels. The Professional Standards Framework for teaching and supporting learning in higher education (HEA, 2006) has been developed for institutions to apply to their professional development programs and activities in order to demonstrate that professional standards for teaching and supporting learning are being met.

The following "good practice" indicators provide a basis for measuring the enhancement of institutional climate at the most appropriate levels: the institutional, department/program and individual teacher levels. They represent some of the constituents of a quality institutional climate in the higher education sector.

Desirable teacher characteristics

The following desirable teacher characteristics have been demonstrated to increase student learning outcomes and achievement.

• Teacher clarity correlates significantly with student achievement (Cabrera, Colbeck & Terenzini, 2001; Feldman, 1976, 1989; Land, 1979; Rosenshine & Furst, 1973).
• Teacher organisation exerts a positive and significant effect on problem-solving skills and occupational awareness (Cabrera, Colbeck & Terenzini, 2001; Feldman, 1976).
• Teachers who motivate students and stimulate interest have a significant impact on student learning outcomes; little learning will occur in the absence of motivation (Feldman, 1976; Hoyt & Lee, 2002; Schacter & Thum, 2004).
• Teachers who are enthusiastic about teaching (Feldman, 1986).
• Teachers who possess a deep knowledge base (Feldman, 1976; Greenwald, Hedges & Laine, 1996; Hanushek, 1989, 1997; Shulman, 1987).
• Teachers who communicate effectively with students, at an appropriate level and in an appropriate interpersonal manner in the delivery of educational material (Young & Shaw, 1999).
• Teachers who demonstrate respect for students (Schacter & Thum, 2004; Young & Shaw, 1999).

These characteristics can be represented in student experience surveys, in performance review and promotion criteria, and in professional development programs. This provides evidence from a range of sources and signals that these characteristics are important and valued by the institution.

Relevant and appropriate teaching experience, qualifications and development

Years of teaching experience and specific teaching qualifications are positively and significantly related to student achievement, according to more than 400 empirical studies (Greenwald, Hedges & Laine, 1996; Hanushek, 1989, 1997; Harvey, Green & Burrows, 1993). There is also evidence that participation and engagement in professional development activities is related to the quality of student learning. Providing opportunities for professional learning and development and for obtaining relevant teaching qualifications, and establishing requirements that such development and qualifications are undertaken, are indicators of an institutional climate that recognises the importance of preparing staff for teaching. Engaging in, and contributing to, the scholarship of teaching and learning is a way to demonstrate commitment and contribution to quality teaching and learning. The UK Professional Standards Framework for teaching and supporting learning, and the Professional Recognition Scheme established by the Higher Education Academy, are informed by this research.

Provision of support services

The provision of adequate support services to both staff and students has been shown to affect the quality of teaching as well as the outcomes of student learning. The importance of a range of support services is strongly supported both theoretically and empirically (Beasley, 1997; Blanc & Martin, 1994; Blanc, DeBuhr & Martin, 1983; Couchman, 1997; Etter, Burmeister & Elder, 2001; Gibbs & Coffey, 2004; Hodges, Dochen & Joy, 2001; Harvey & Green, 1993; House & Kuchynka, 1997; Lietz, 1996; Martinez & Munday, 1998; McInnis, James & Hartley, 2000; Peat, Dalziel & Grant, 2001; Prebble et al., 2004; Smart, Feldman & Ethington, 2006; Tinto & Pusser, 2006; Treisman, 1993; Williford, Chapman & Kahrig, 2000-2001; Yorke, 1998; Zeegers, 1994; Zeegers & Martin, 2001). These may include:

• student financial support
• financial scholarships for under-represented/disadvantaged groups of students
• student educational/academic support
• student social support and transition programs
• support specifically for minority students
• guidance/counselling services and student organisations
• staff development programs
• the provision of advice and support for the interpretation of feedback/evaluation data

Providing support services that assist students in overcoming learning difficulties has the potential to result in better student performance (Lietz, 1996). However, in common with many indicators, it is not the amount of money spent that leads to more engagement and learning, but how this money is allocated (Gansemer-Topf, Saunders, Schuh & Shelley, 2004). If money is spent on services that only a few students access (or know about), it is unlikely to have an impact. The same is true for the kind of services on offer. While programs such as Supplemental Instruction or Peer Assisted Study Sessions are directly related to learning, cafeterias or sport clubs may enhance the overall satisfaction with the university. Student support services are more appropriately considered an institutional indicator for quality learning than a national one, though benchmarking can take place across cognate institutions.

Use of current research findings to inform teaching and curriculum/course content

This indicator refers to the extent to which institutional leaders and teaching staff are familiar with research findings on teaching and learning and the ways they apply these findings to the curriculum and to the practical application of assessment methods (Hertz, 2007). Specifically, it refers to the extent to which academic staff utilise research on the relationship between teaching strategies and student learning in the educational process. Research has demonstrated that quality teachers are those who integrate established theories of learning into their educational practice in order to understand how students develop and learn. They do this by evaluating and reflecting on the effects of curriculum design, personal teaching styles and approaches to assessment on student learning. This in turn allows educators to cater for individual needs, improving student learning outcomes and experience (Barr, 1995; Kuh, Kienzie, Schuh & Whitt, 2005; NBPTS, 2000; Nine Principles Guiding Teaching and Learning in the University of Melbourne, 2002).

In addition to the importance of the scholarship of teaching in the design of the curriculum and its application in teaching practice, there is a growing body of evidence that the planned involvement of students in meaningful research activities at the undergraduate level has positive benefits for students' learning and engagement (Jenkins et al., 2007; Pascarella & Terenzini, 1991, 2005). Evidence of this indicator might be found in appointment and promotion criteria for teaching staff, and in the rationales for policies and institutional practices such as online teaching, assessment, work-based learning provision, curriculum models, curriculum review policies, planned teaching and learning strategies, processes and practices, and transition programs.

Community engagement/partnership

This indicator refers to institutional activities that involve and engage the community, collaboration with local and overseas business and industry, and the provision of outreach programs, community workshops and conferences, regional libraries and the like. Collaborating with business and industry creates opportunities and valuable practical experience for students completing higher education degrees, developing and reinforcing specialised skills and preparing them for success in the workplace. Although this is not a direct measure of teaching and learning quality, links to business and industry can enhance institutional performance by improving student learning outcomes. This indicator may be more relevant to institutions that emphasise community engagement or work-based learning as part of their mission, or to programs of study that have a professional practice requirement.

Diversity and inclusivity

Diversity in higher education relates to ethnic, cultural and socioeconomic diversity, as well as diversity in students' and teachers' abilities, talents and learning approaches. Diversity is an indicator that is theoretically and empirically supported by the research literature and is frequently employed as a measure of quality teaching. It is described in this section in terms of 'diversity interactions' and 'enrolment rates'. A significant body of research is concerned with the impact of diversity on student learning, and there is general consensus that both interacting with students from different backgrounds and the value a university places on diversity have indirect yet positive effects on the quality of student learning and on a number of desired outcomes (Antonio, 2001; Gurin, 1999; Gurin & Nagda, 2006; Hu & Kuh, 2003b; Hurtado, Milem, Clayton-Pedersen & Allen, 1998, 1999; Inoue, 2005; Nelson Laird, 2005; Pascarella, Palmer, Moye & Pierson, 2001; Terenzini, Cabrera, Colbeck, Bjorklund & Parente, 2001; Umbach & Kuh, 2006).


At the institutional level, this would be evidenced by the provision of a range of services and support that meet the needs of students, which might include: challenge and extension programs for the academically gifted; work- or field-based learning experiences; learning assistance courses; student bridging programs; bilingual courses; classes outside the regular 9am-5pm times; tuition and student services; and online and varied learning resources. It would also be evident in the resources and support provided for staff through professional development programs on cultural sensitivity and communication, client-focussed services, and the provision of multiple pathways for students. Some of these services and support might more appropriately be provided at the faculty level rather than the institutional level. A review of institutional policies and practices through a diversity lens is therefore important.

At the department/program level, indicators would include evidence of flexible curriculum design; the extent to which educators vary presentation and assessment methods to cater for, and be inclusive of, individual origins, perspectives, learning needs and differing levels of student development and ability; and the provision of relevant work-based or research-based experiences (Harvey & Green, 1993; Hounsell & Entwistle, 2005; Lizzio et al., 2002; Rainey & Kolb, 1995; Schacter & Thum, 2004).

At the individual level, indicators include evidence that teachers account for student diversity by emphasising individual differences in the assessment process (including presenting classes at an appropriate pace and setting assessments at an appropriate level); utilise integrated teaching methods; adapt teaching materials according to context and to diversity and inclusivity principles; and provide an optimal classroom environment which embraces diversity, inclusivity and equity (Schacter & Thum, 2004; Harvey & Green, 1993; Young & Shaw, 1999).

During the last decade, the higher education sector and its students have become more internationalised and diverse (Northedge, 2003). Higher education statistics show that Australia is a major destination for international students. It is important to understand the underlying educational rationale for cultural diversity: it provides interactional opportunities for all students – local students as well as students from other cultural backgrounds (Chang & Astin, 1997).

Diversity interactions / valuing multiculturalism

Research has shown that learning and collaborating with peers from different cultures and backgrounds can positively affect self-reported gains in learning (Hu & Kuh, 2003b) and that inter-group dialogue involving the exploration of commonalities and differences (Gurin & Nagda, 2006) is likely to foster learning among diverse students. Inoue (2005) and Nelson Laird (2005) found that diversity experiences are very likely to enhance critical thinking and cultural sensitivity. The effects of interactions between diverse peers will, however, depend on the nature and quality of those interactions. Positive effects on learning outcomes can occur when the following conditions are met (Hurtado, Dey, Gurin & Gurin, 2003):

• the groups possess equal status
• the group has common goals and cooperates
• group equality is supported by institutional leaders
• group members have a variety of opportunities to get to know each other.

A longitudinal study at the Higher Education Research Institute at the University of California (Chang & Astin, 1997) found that a culturally diverse student population has beneficial effects on racial and cultural understanding and on academic and personal development. This is consistent with Astin's (1993b) finding that a multicultural university environment had positive impacts on college GPA (Grade Point Average) and student retention. In order to foster diversity interactions on campus, it is argued that relevant policies should be in place to increase minority enrolment (Chang, 2002; Gurin, 1999; Hurtado, Dey, Gurin & Gurin, 2003; Smith, Gerbig, Figueroa et al., 1997). A diverse student body is seen as a necessary condition for diversity, but not a sufficient one: it is the quality of the interactions that determines the learning that results (Pike & Kuh, 2006). Research on service learning and diversity has led to the recommendation that direct strategies, such as the integration of diversity into coursework, should be practised (Suyemotu & Nien-chu Kiang, 2003). This reflects a growing consensus among international practitioners that intercultural issues should be a mandatory part of the curriculum, as "good academic education has to develop a broader perspective that is composed of a coherent combination of intercultural competence, critical thinking and comparative thinking" (Yershova, DeJaegere & Mestenhauser, 2000, cited in Otten, 2003, pp. 18-19). It is also regarded as most useful to enhance teaching staff's capacity to use teaching methods that are culturally sensitive, foster respect for different cultures and address a variety of learning styles (Hurtado, 1996).

Enrolment rates / valuing a diverse student body

Diversity in terms of the wide variety of abilities, knowledge and learning skills students bring to university is another important factor to consider. In the face of an increasingly diverse student body characterised by different learning patterns, abilities and interests, universities may consider a shift from an 'integration approach' (adapting and integrating students into a particular academic culture) towards the more learner-centred 'adaptation approach' (institutions adapt their administrative and academic cultures to meet the diverse learning patterns, abilities and interests of the student body) (Zepke, Leach, Prebble et al., 2005).

The use of enrolment rates as a quality performance indicator is generally supported by the theoretical and empirical literature. Enrolment data are collected at the institutional level. Information on the composition of the student body should be reflected in department and program level operations in the design of the curriculum, teaching methods and strategies, and assessment, so that these demonstrably meet the needs of students with different backgrounds, learning styles and entering abilities (see Accommodating for student diversity). Higher education institutions typically record the number of students entering programs, but undertake little investigation or interpretation to identify student characteristics that could point to particular programs or educational needs that might usefully be provided. Institutions may benefit from collecting data from enrolling students in order to gain a more complete perspective of the diversity of the student population within the institution as a whole, and from providing this information to different levels, particularly the department and program levels. The following data sources have been found to be related to student achievement:

• admission standards (e.g. ENTER score)
• percentage of new admissions
• percentage of full-time students
• social origins of students
• gender of students
• socioeconomic status of students
• cultural affiliation of students
• race/ethnicity of students

Catering for a diversity of learning needs and providing students the opportunity to achieve stated learning outcomes in a variety of ways may enhance the quality of student learning (Lizzio, Wilson & Simons, 2002; McDaniel, Felder, Gordon et al., 2000; Rainey & Kolb, 1995; Vermunt & Vermetten, 2004). However, caution must be applied to any interpretation as the enrolment rates do not take account of differential processes of enrolment, different institutional missions, target student populations and catchment areas. Neither do they provide evidence of the quality of teaching and learning.
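As a concrete, if simplified, illustration of how enrolment data might be fed back to the department and program levels, the sketch below tabulates the composition of a commencing cohort against one of the characteristics listed above. The record fields and categories are invented for the example and are not drawn from any particular institutional or national data dictionary.

```python
from collections import Counter, defaultdict

# Hypothetical enrolment records; field names and values are illustrative only.
enrolments = [
    {"program": "BSc", "full_time": True,  "ses": "low"},
    {"program": "BSc", "full_time": False, "ses": "medium"},
    {"program": "BA",  "full_time": True,  "ses": "high"},
    {"program": "BA",  "full_time": False, "ses": "low"},
]

def composition_by_program(records, attribute):
    """For each program, return the proportion of commencing students
    falling into each category of the chosen attribute."""
    counts = defaultdict(Counter)
    for record in records:
        counts[record["program"]][record[attribute]] += 1
    return {
        program: {category: n / sum(tally.values()) for category, n in tally.items()}
        for program, tally in counts.items()
    }

# e.g. the socioeconomic mix of each program's commencing cohort, which a
# program team could set against its curriculum design and support provision.
print(composition_by_program(enrolments, "ses"))
```

The caveats in the preceding paragraph still apply: summaries of this kind describe the student body, not the quality of teaching and learning.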

Assessment

Assessment is both an indicator of learning and an indicator of the quality of teaching, systems and practices. The most direct measures of student learning are the assessment tasks students complete while enrolled in their program of study. Research has repeatedly shown that assessment does not merely serve to inform students about their achievements, but is a necessary condition for quality learning. In other words, assessment drives learning (Greer, 2001; Harris & James, 2006).

Indicators of learning quality

There has been a steady increase in interest from governments, their agencies and employer bodies in more direct indicators of learning quality in higher education institutions. The learning indicators currently in use, such as student progress rates and grade point averages, are criticised as indirect or proxy measures of learning. There is a strong impetus from governments, funding bodies and testing organisations to seek direct measures of the quality of student learning through the use of common tests, for example the Graduate Skills Assessment (GSA) in Australia and the Collegiate Learning Assessment (CLA) in the USA. Taking this a step further, the US Department of Education has recently allocated significant funding to identify and develop measures to assess and report student achievement and institutional performance. There are similar moves underway in the European Higher Education Area to develop a test for measuring learning, drawing on the experience of the PISA study (see Section 2 for details).

Calls for generic and standardised measures of learning (whether broadly generic or generic for a discipline) are not embraced by universities and their students, because such measures do not recognise the learning that has taken place in the program of study. What universities, students and their parents value is the quality of learning in the discipline that takes place through completing the qualification. The idea that this might be captured by a single test, or a 'value added' pre- and post-university test score that has no relationship to the learning that has been undertaken, is considered problematic by those engaged in higher education. While the concept of such a test is appealing as a way of asserting that institutions are providing a quality learning experience and adding value for their students, tests such as these are still proxy measures of learning quality and are no more informative than the proxy measures currently in use.

Tests can be of value to institutions by providing information about areas to which services and support should be directed, and by prompting further questions about preparation in programs of study. However, their value across institutions progressively diminishes, particularly when the comparison involves different types of institutions with different programs, goals, resource bases, reputations and geographical locations. While accounting for these differences through statistical models can make the information more meaningful at the national level, it renders the resulting scores meaningless to the institution, particularly at the program level. When funding is attached to these test scores they become high stakes, and may eventually lead to institutional and teacher behaviours directed at maximising the scores. This could be at the expense of the quality of education experienced by students, as has been claimed is occurring in the schools sector as a result of standardised national testing (Hurst, 2007).

Assessment indicators

The literature on good practice in assessment is extensive and well developed (Biggs, 1999; CSHE, 2002; Gibbs & Simpson, 2004-05; Yorke, 2003), and many universities have adopted a number of effective approaches to assessment. There exists a great variety of methods, ideally aligned with specific learning goals, student learning approaches and the particular subject. This diversity is desirable and essential, yet it is not an end in itself; it should also be used to encourage institutional improvement (Mowl, McDowell & Brown, 1996; Peterson & Augustine, 2000). It is therefore to the design, delivery and administration of assessment, the provision of feedback, and the moderation and review of assessment that universities should direct their attention, and it is here that governments and their agencies could have the greatest impact on student learning.

Indicators of quality assessment include the development and implementation of systems and reviews. External and internal reviewers can have different perspectives on the role of reviewing and documenting assessment practices and processes. External reviewers are most commonly interested in the policies and processes of assessment, criteria and standards of grading, comparability and compliance. Internal reviewers are more commonly interested in pursuing feedback on the effectiveness of assessment practices and processes in supporting learning and improving grading practices (Hunt, Larson & Greene, 2002). External reviews have the advantage of being independent and unbiased (as long as institutional differences are taken into account). However, where externally imposed policies and practices do not mesh with internal improvement activities, resistance and resentment among institutional stakeholders is likely to result (Christie & Stehlik, 2002; Harvey & Newton, 2004; Peterson & Augustine, 2000; Warde, 1996, cited in Harvey & Newton, 2004). Quality reviews of assessment practices should therefore follow an "enhancement-led" approach (Harvey & Newton, 2004).

The quality of assessment is visible from the following:

1. The systems of review used to evaluate the quality of assessment at the program and subject level, including the assessment of graduate attributes. A mix of internal and external systems of review is desirable.

2. The review of actual assessment practice, for example marking standards and grading practices, using internal and external peer review.

3. The systematic review of assessment tasks, that is, the range and type of tasks as well as the quality of the tasks in achieving their purpose.

4. The feedback given to students and the systems in place to review the value of that feedback, in terms of timeliness, quality and whether the feedback leads to improvement. Assessment should also provide feedback on the program of study and on teaching staff.

The extent to which quality reviews of assessment and feedback are in use, as well as the quality of such reviews (that is, the extent to which they are based on research findings on learning and development, and whether they are aimed at promoting quality learning and improvement), can serve as a powerful indicator of quality student learning.

"Good practice" indicators for quality assessment

Although a number of assessment indicators are in widespread use, many are not supported by the literature. "Good practice" indicators are those endorsed by the empirical literature. An example of a good input assessment indicator is having explicit learning outcomes at the department/program and individual teacher levels (explicit communication of student learning objectives). These are evidenced by educational material and content being consistently connected to prior learning and life experiences where feasible, by course content being integrated across disciplines and programs, and by the provision of opportunities to develop increasingly sophisticated levels of understanding. By ensuring that knowledge and understanding are developed cohesively and consistently, students are more likely to develop a more complete understanding of a concept while acquiring a sound grounding in the discipline (Gaither, Nedwek & Neal, 1994; Schacter & Thum, 2004).


Another assessment indicator of "good practice", strongly supported by the empirical literature for use at the department/program and individual teacher levels, is the provision of specific, continuous and timely feedback on the quality of student learning (Black & Wiliam, 1998; Gibbs & Simpson, 2004; Hattie & Jaeger, 1998). The provision of feedback has been empirically supported as contributing to improved student learning achievement and outcomes (Cabrera, Colbeck & Terenzini, 2001; Kulik & Kulik, 1979; Robles, 1999; Schacter & Thum, 2004; Williams, 2002). Yet specific reference to the provision of feedback to students as an indicator of quality teaching and learning appears in only a small number of institutional policy documents.

Perhaps the most important assessment indicator as an institutional level outcome measure is the value of graduates. This is of great significance given that one of the purposes of higher education is to equip students for further study or the workforce, and it is theoretically supported by the literature. Typically, indicators such as student learning outcomes, student skills/knowledge, student preparedness, student engagement, graduation rates, attrition/retention rates, and graduate employment/income rates are evaluated separately. It is argued, however, that combining this information will provide a more complete picture of the overall value of graduates to society and the workforce. This information can be used at the institution and department/program levels when reviewing the provision of support services, curriculum development and review, and community engagement.

The assessment indicators in Table 3.3 are supported by empirical research as contributing to student learning and quality enhancement.

Table 3.3: Illustrative performance indicators for quality assessment

Institutional level
Input:
• Assessment policies address issues of pedagogy (not merely procedural matters)
• Adopting an evidence-based approach to assessment policies
• Alignment between institutional policy for best practice and faculty/departmental activities
• Commitment to formative assessment
Process:
• Collecting longitudinal assessment data
• Using quantitative and qualitative data
• Providing professional development regarding the development, marking and review of assessment for faculty, staff and administrators
• Regular reviews of assessment practices in terms of student learning (using a mix of internal and external reviewers)
• Conducting assessment for internal improvement
Outcome:
• Institutional and educational improvement
• Transformation of the student learning experience
• Increasing 'assessment literacy' across the institution

Faculty/department level
Input:
• Policies in place that guide individuals' assessment practices
• Assessment is built into departmental planning and review
• Assessment practice is reviewed by discipline
• Commitment to formative assessment or "learning-oriented assessment"
Process:
• Involving faculty in developing assessment tools
• Aligning assessment practices with the goals of specific subjects
• Assessment tasks are developed in a manner that helps to systematically assess the achievement of graduate attributes
• Integrating peer review of marking standards
• Regular exploration of innovative assessment practices based on improved knowledge
• Analysing and distributing results of student assessment to staff
Outcome:
• Transformation of the student learning experience

Teacher level
Input:
• Basing assessment on what is known about student learning
• Assessors have substantial grounding in the theory and practice of assessment
Process:
• Engaging students in the assessment process
• Employing a variety of assessment methods
• Embedding desired graduate attributes in assessment tasks
• Using explanatory and diagnostic feedback
Outcome:
• Student learning
• Lifelong learning
• Transformative student learning
• Student satisfaction/engagement

Engagement and learning community

The academic environment is the primary means by which students further their learning, abilities and interests, making it a central dimension of student success (Smart, Feldman & Ethington, 2000). Student learning communities can be considered from two perspectives: first, from an institutional perspective, that is, the extent to which students feel they belong to and are engaged as a community of learners; and second, in terms of the provision of distinct and varied learning communities within the institution in which students and staff can engage.

Learning communities

Empirical and theoretical research has shown that participating in student learning communities (both formal and informal, academic and non-academic) can improve learning outcomes (Thomas, 2002), grades and academic performance (Astin, 1996; Berger, 2002; Carini, Kuh & Klein, 2006; Gordon, Young & Kalianov, 2001; Hounsell & Entwistle, 2005; Kuh, 2003; Minkler, 2002; Pace, 1979, 1995; Rau & Durand, 2000; Shulman, 2002; Tam, 2007; Tinto, 1997; Tinto & Russo, 1993; Zhao & Kuh, 2004). Even in large universities with hundreds of students in a single course, it is possible to develop formal, discrete learning communities within these larger classes as an effective way to enhance academic achievement (Baker & Pomerantz, 2000-2001; Chalmers et al., 2003; Mangold, Bean, Adams et al., 2002-2003). However, simply forming students into classes does not necessarily lead to the formation of a learning community; active promotion, support and engagement are needed from both staff and students (Jenkins, Healey & Zetter, 2007).

The strength of learning communities lies in their integrated approach to education. Integrated educational experiences more closely parallel the way people learn and are more relevant to real-world events. Students have the opportunity to see topics from multiple, sometimes even conflicting, perspectives, allowing for more critical thinking (Rasmussen & Skinner, 1999, cited in University of Wisconsin-Stevens Point). Learning communities embody the notions of collaborative and active learning, which have been shown to improve the quality of student learning.

Viewed from an institutional or department level, there are particular structures and practices that can be indicative of learning communities. Informal learning communities are more likely to develop when an institution provides suitable facilities, services, support and events that help students to become assimilated and involved in the institution's social milieu. For example, the formation of informal study groups and the provision of social events for first year students are indicative of a commitment to establish and foster a learning community, and research has shown that these are indirectly related to gains in student learning (Pike, Kuh & Gonyea, 2003). Creating formal student learning communities is another strategy for improving the quality of learning. This is usually achieved by coordinating a number of courses into a single program or combining several subjects to work on a common theme. This interdisciplinary approach furthers learning through discussion and examination of a greater variety of topics and is characterised by a greater diversity of students (Maricopa Community College District, 2002). Other possible types of learning communities include special seminars to enhance study skills and adaptation to university study.

Institutions which provide and foster professional development for teaching staff on the importance of learning communities are more likely to develop communities that have an impact on student learning (Washington Center for Improving the Quality of Undergraduate Education, n.d.). Similarly, an institutional climate that values openness and promotes knowledge sharing and distribution, collaboration and dialogue is almost a prerequisite for the development of fruitful learning communities (Kilpatrick, Barret & Jones, 2003). Student learning communities are closely related to the quality of student engagement; the two are complementary and interconnected.

Engagement

The importance of students' commitment to and engagement with their own education has been emphasised by Pace (1979, 1995), who shaped the notion of "quality of effort"; by Chickering and Gamson (1987) through their "good practices" (especially the use of active learning techniques); and by Astin (1979, 1985, 1993) with his "involvement principle". As such, engagement is considered one of the better indicators and predictors of learning and development, for the more a student studies, practises and puts effort into a subject, the more he or she tends to learn about it (Carini, Kuh & Klein, 2006). By engaging in educationally purposeful activities at university, students develop skills and habits for continuous, autonomous learning – an important factor in life and career after college. This means that engagement is not merely a proxy for learning; it can also be an end in itself and a "fundamental purpose for education" (Shulman, 2002, p. 40).

Measuring learning community and engagement

A widely used tool for understanding engagement in the US is the National Survey of Student Engagement (NSSE). It is used to assess ongoing progress and to uncover ways to improve student learning and institutional performance. ACER is currently developing a version of the NSSE, the Australasian University Survey of Student Engagement (AUSSE), for use within institutions (see Section 2.2 for a more detailed description of measures of student engagement). A number of survey instruments identify aspects of student engagement:

• Several scales in the Course Experience Questionnaire (CEQ)
• The College Student Report, Student Engagement Questionnaire (SEQ)
• Several items from the First Year Experience Questionnaire (FYEQ)
• Selected scales in the Postgraduate Research Experience Questionnaire (PREQ)
• College Student Experience Questionnaire (CSEQ)
• Scales in the College Student Survey (CSS).

A number of survey instruments have scales and items on learning community. For example, a learning community scale is available in the Course Experience Questionnaire (CEQ) and is included in a number of student engagement surveys.

An issue to take into account when considering student engagement data is that it relates to the amount of time spent engaged with learning tasks. This can be assessed from the student's perspective and from the institutional perspective, but both need to be interpreted with caution. From the student perspective, life situations are more complex than ever before, with greater numbers of mature-age students entering university and more students studying at a distance, studying part-time, working while studying, and caring for dependent children or parents (Kuh, 2003). Each university, and each program of study within it, will have different proportions of students in these situations, so taking a single measure and applying it across universities, or across a university, could present a distorted picture. It would be more appropriate to measure engagement at multiple levels within the institution in order to account for local, institution- and student-specific circumstances. From the institutional perspective, caution is also necessary when interpreting the amount of time spent in student-teacher interaction; more interaction may not be better (Kuh, 2003). It has been found to be more important to have substantive contact between teachers and students than casual contact.

While survey instruments can have value at the national or sector level, their real value is at the institutional level, where they can be used to inform institutional improvement. Since every university should be viewed in the context of its unique local environment, history and students, institutions need access to their own data about engagement and community.
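A simple sketch of the multi-level reporting argued for above follows. It is hypothetical: the response records, the 1-5 engagement score and the grouping fields are invented for illustration and are not taken from the NSSE, AUSSE or any other instrument; the minimum group size guard simply signals that small groups should not be reported as though they were reliable.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical survey responses; fields and the 1-5 scale are illustrative only.
responses = [
    {"faculty": "Science", "program": "BSc Biology", "mode": "part-time", "engagement": 3.8},
    {"faculty": "Science", "program": "BSc Biology", "mode": "full-time", "engagement": 4.2},
    {"faculty": "Arts",    "program": "BA History",  "mode": "full-time", "engagement": 3.5},
    {"faculty": "Arts",    "program": "BA History",  "mode": "part-time", "engagement": 3.1},
]

def mean_engagement(records, *group_fields, min_group_size=10):
    """Average an engagement score at any chosen level (faculty, program,
    study mode, ...), suppressing groups too small to report reliably."""
    groups = defaultdict(list)
    for record in records:
        key = tuple(record[field] for field in group_fields)
        groups[key].append(record["engagement"])
    return {
        key: (mean(scores) if len(scores) >= min_group_size else None)  # None = suppressed
        for key, scores in groups.items()
    }

# The same data viewed at faculty level and at program/study-mode level:
print(mean_engagement(responses, "faculty", min_group_size=1))
print(mean_engagement(responses, "program", "mode", min_group_size=1))
```

Reporting the same responses at several levels in this way keeps interpretation tied to local circumstances, rather than collapsing engagement into a single institution-wide figure.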

Staff engagement and learning community

The extent to which staff feel engaged with their teaching and research, and the extent to which they are connected to a learning community, are important aspects of this dimension. Staff are a vital component in building positive experiences of student engagement and community, and many of the same indicators that apply to students apply to staff.

Benchmarking in institutions

The term benchmarking is used so regularly that its meaning is generally assumed to be universally shared and agreed. This is not always the case, as there are numerous definitions of benchmarking and multiple categorisations of benchmarking types. Benchmark, or "bench/mark", originally referred to a mark on a permanent object indicating elevation and serving as a physical reference in topographic surveys and tidal observations. This concept of a reference point persists today, and the term "benchmark" is often taken to mean something that serves as a standard by which others may be measured or judged. These measurements are often identified as performance indicators.

The term benchmarking has evolved from a simple measurement technique into a strategic approach that considers not just the result achieved but also the process by which it was achieved. Benchmarking is therefore defined as a means through which good or best practice can be identified and adopted: the formal and structured process of seeking those practices which lead to superior or excellent performance, the observation and exchange of information about those practices, and the adaptation and implementation of those practices in one's own institution (Meade, 1994).

While benchmarking may be considered an important self-improvement tool which allows inter- as well as intra-institutional comparison, it is critical that benchmarking be applied within the context of a university's mission, goals and history (USP, 2004; Fieldman, 1997), for if contextual influences are not accounted for, comparison between inherently diverse higher education institutions becomes markedly problematic. Specifically, benchmarking activities should only be considered where the institutions to be compared are similar in terms of mission statements, institutional procedures, activities, student populations, geographical location, economic and political context, and access to resources, to name but a few of the dimensions which need to be accounted for. For this reason benchmarking across higher education institutions is inherently difficult (Guthrie, 2004) and should be approached cautiously. A benchmarking manual developed for use in Australian higher education institutions (McKinnon et al., 1999) is described in Section 1, and attests to some of the difficulties that can occur.


A review of current practice in teaching and learning in Australian universities

As part of the Carrick Institute project, two studies are currently in progress to examine the use of performance indicators of teaching and learning, and the use of student surveys, in institutions. Thirty-four institutions have indicated their willingness to be involved in these two studies. The outcomes of the studies will provide a basis for the internal review of institutional practices and a useful starting point for discussions on benchmarking of teaching and learning indicators.

The focus of the first study is to review the teaching and learning indicators and outcomes in use at the institutional (including whole-of-institution, faculty and program) and individual teacher levels on the quality of teaching and learning in Australian universities. This is primarily a survey of current practice in institutions, with information sourced from policies, procedures, appointment and promotion criteria, performance management processes and outcomes related to teacher quality. The primary areas for review in this study include:

• Institutional strategic and operational plans
• Institutional/teaching quality indicators and assessment, such as quality enhancement/assurance processes and procedures in relation to teaching and learning
• Curriculum review processes and the use of that information
• Student perceptions of teaching and learning and the ways in which this information is collected and used
• Institutional systems for allocating funding on teaching quality internally using the operating budget, and for allocating LTPF funding, including reporting and evaluation strategies on the use and impact of teaching and learning funding.

The study is designed to investigate the policies, practices and quality assurance systems related to teaching and learning in use at each Australian institution. A quality systems approach has been adopted in reviewing the practices, with a focus on dimensions which are relevant to demonstrating quality in teaching and learning. The specific areas were chosen because they provide a comprehensive overview of current practice regarding recognising and rewarding quality teaching and learning. Specific areas reviewed include:

• Major Goals/Vision
• Teaching and Learning Policies and Plans
• Teaching and Learning Indicators (as measured and used at each institution)
• Graduate Attribute Statement (and how this is embedded and assessed in courses)
• Assessment and Feedback Policies (not in great detail as there is another Carrick project looking in detail at Assessment Policy)
• Student Experience (encompasses the entire student experience, including the physical environment, resources available, and policies such as promotion of engagement and internationalism)
• Appointment and Promotion Criteria
• Professional Development (foundations for teaching programs through to academic leadership programs for Heads of Departments)
• Organisational Unit Review (Discipline, Division, Faculty, School, Centre)
• Curriculum Review (units, streams, programs)
• Review of Academic Staff (at all levels; criteria, frequency, implications)
• Recognition (Awards, Grants, Citations)
• Funding (LTPF, Allocation to Teaching and Learning)

The intent is to identify systems and practices at different levels within the institution. For example, in summarising processes and practices related to academic staff, a review is undertaken of practices and policies that apply from the time of appointment, through probation, review, professional development, and promotion or alternative outcomes. Continuing with this example, additional aspects to consider include the support and resources available for both staff and students (such as the availability of first-year mentoring and support services for students, and professional development opportunities for staff), and awards and other forms of recognition for teachers. As the teaching and learning environment is a complex interaction of policies, practices and relationships, it is important to capture as many dimensions of this experience as possible to inform future development.

This study examines university practices at a number of levels: individual teacher, departmental and program level measures and outcomes, as well as those in use at the institution level. The AVCC summary report (2004) provided a university-level summary of teaching and learning indicators that could be used by institutions for planning and benchmarking purposes: Student Satisfaction, Retention and Completion, Support Services, Financial Resources, Graduate Outcomes, Internationalisation, Reputation, Teaching Resources and Teaching Scholarship. The Carrick Institute study will update the AVCC summary for these particular indicators at the institutional level, and will provide a deeper examination of practices within the institution to include faculty, department, program and individual teacher level indicators as appropriate. This allows for a more in-depth analysis of the teaching and learning environment and the staff and students' experiences.

The second study is being carried out by a team from the Institute for Teaching and Learning at the University of Sydney, led by Dr Simon Barrie. The focus of this study is the use of student surveys on teaching and learning. Its aim is to explore current institutional Student Evaluation of Teaching (SET) practices in Australian universities with a view to developing a framework that would assist in making sense of the SET data currently collected in universities. An interim report on this study is now available from the Carrick Institute website (Barrie, Ginns & Symons, 2007).

The outcomes of these two studies, together with the outcomes of the three additional studies, will be used to inform the development of a framework of teaching and learning indicators for use within institutions. Once established, it is expected that some of these indicators will be suitable for use at the sector and national levels.

Summary of indicators of quality teaching and learning

The majority of work completed on performance indicators in higher education has been undertaken with reference (explicit or implicit) to the expectations of external bodies which have an interest in performance and comparability between universities. Relatively little emphasis has been given to aspects of intra-institutional performance. This report suggests that it is at this level that indicators can be most usefully employed, and where they are most likely to lead to an enhanced learning environment which benefits students. Four dimensions of teaching practice are identified in this report:

1. Institutional climate and systems
2. Diversity and inclusivity
3. Assessment
4. Engagement and learning community

Each dimension can draw on an extensive range of indicators and measures that have been shown to provide evidence of, or have an impact on, the quality of student learning and the student and staff experience. If this information is collected and interpreted judiciously, it will provide institutions with the opportunity to review their practices and processes in a way that demonstrates effectiveness and provides direction for enhancing the quality of teaching and learning.

Framework for dimensions of quality teaching practice

The four dimensions are conceptualised in the following diagram, with input, process, output and outcome indicators all necessary for a more complete understanding of the institution. The different levels of involvement of the institution, and of the people within it, are identified as critical, for it is the people who must provide the commitment to, and engagement with, the process if change is to take place.
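By way of illustration only, and not as a reproduction of the framework diagram, the short sketch below shows one way an institution might record indicators tagged by the type distinction used here (input, process, output, outcome) and by organisational level, so that each of the four dimensions can be examined from several angles. The sample indicators, level names and field names are assumptions introduced purely for the example.

```python
# Illustrative only: a minimal way to organise teaching and learning indicators
# by dimension, indicator type and organisational level. The sample entries are
# hypothetical and do not reproduce the report's framework diagram.
from collections import defaultdict
from dataclasses import dataclass

DIMENSIONS = (
    "Institutional climate and systems",
    "Diversity and inclusivity",
    "Assessment",
    "Engagement and learning community",
)
TYPES = ("input", "process", "output", "outcome")
LEVELS = ("institution", "faculty", "program", "teacher")

@dataclass(frozen=True)
class Indicator:
    name: str
    dimension: str
    type_: str   # input, process, output or outcome
    level: str   # organisational level at which it is collected

def validate(ind: Indicator) -> Indicator:
    assert ind.dimension in DIMENSIONS and ind.type_ in TYPES and ind.level in LEVELS
    return ind

REGISTER = [
    validate(Indicator("Funding allocated to teaching development", DIMENSIONS[0], "input", "institution")),
    validate(Indicator("Curriculum review cycle completed", DIMENSIONS[2], "process", "program")),
    validate(Indicator("Student survey response on feedback quality", DIMENSIONS[2], "output", "teacher")),
    validate(Indicator("Retention of first-year students", DIMENSIONS[3], "outcome", "faculty")),
]

def by_dimension(register: list[Indicator]) -> dict[str, list[Indicator]]:
    """Group indicators so each dimension can be reviewed across all four types."""
    grouped: dict[str, list[Indicator]] = defaultdict(list)
    for ind in register:
        grouped[ind.dimension].append(ind)
    return grouped

if __name__ == "__main__":
    for dimension, indicators in by_dimension(REGISTER).items():
        print(dimension)
        for ind in indicators:
            print(f"  [{ind.type_:>7} | {ind.level:>11}] {ind.name}")
```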

Once established, some of these indicators will be suitable for use at the sector and national level. However, it must be recognised that, at the national and sector level, all such measures and indicators can only be considered proxies. Fragmentation can occur when institutions are required to collect a battery of information that has little relevance for institutional practice or interpretation, does not relate to teaching quality, and offers little direction for improvement.

Conclusion

If there is to be real engagement of higher education institutions in developing and implementing teaching and learning indicators, then the focus needs to be on quality enhancement at the institutional level. Once the measures and indicators are established in institutions, a judicious selection of these can then be considered for inclusion at the sector and national levels.

REFERENCES

AASCU (American Association of State Colleges and Universities). Value-added assessment. Accountability's new frontier. Retrieved March 13, 2007 from: http://www.aascu.org/pdf/06_perspectives.pdf

Access Economics Pty Ltd. (2005). Report for The Department of Education, Science and Training (DEST). Review of Higher Education outcome performance indicators. Canberra: Commonwealth Department of Education, Science and Training. Retrieved January 23, 2007 from: http://www.dest.gov.au/sectors/higher_education/publications_resources/profiles/review_highered_outcome_perf_indicators.htm

Ako Aotearoa, National Centre for Tertiary Teaching Excellence website (2007). Available at: http://www.nctte.ac.nz/

American Association for Higher Education (AAHE). (1998). American College Personnel Association. National Association of Student Personnel Administrators. Powerful partnerships. A shared responsibility for learning. A joint report. Retrieved February 23, 2007 from: http://www.aahe.org/teaching/tsk_frce.htm

Anderson, G. (2006). Assuring quality/resisting quality assurance: academics' responses to 'quality' in some Australian universities. Quality in Higher Education, 12 (2), 161-173.

Antonio, A. L. (2001). The role of interracial interaction in the development of leadership skills and cultural knowledge and understanding. Research in Higher Education, 42(5), 593-617.

Association of American Colleges & Universities (AACU). (2006). College learning for the new global century. A report from the National Leadership Council for Liberal Education & America's Promise. Executive Summary. Retrieved February 23, 2007 from: http://aacusecure.nisgroup.com/advocacy/leap/documents/GlobalCentury_ExecSum_final.pdf

Astin, A. W. (1979). Four critical years: effects of college on beliefs, attitudes and knowledge. San Francisco: Jossey Bass.

Astin, A. W. (1985). Achieving educational excellence: A critical analysis of priorities and practices in Higher Education. San Francisco: Jossey Bass.

Astin, A. W. (1993a). What matters in college: four critical years revisited. San Francisco: Jossey Bass.

Astin, A. W. (1996). Involvement in learning revisited: Lessons we have learned. Journal of College Student Development, 37(2), 123-134.

Australian Council for Educational Research (ACER). (2001). Graduate Skills Assessment. Summary Report. Canberra: Author.

Australian Council for Educational Research (ACER). (2001). Graduate Skills Assessment. Stage One Validity Study. Canberra: Author. Australian Council for Educational Research (ACER). (2007a). Australasian Survey of Student Engagement- Institution Administration Manual (draft). Australian Council for Educational Research (ACER). (2007b). Australasian Survey of Student Engagement- SEQ Items (draft). Australian Council for Educational Research (ACER). (2007c). Australasian Survey of Student Engagement- AUSSE Tables (draft). Australian Council for Educational Research (ACER). (2007d). Australasian Survey of Student Engagement- AUSSE feedback sheet. Australian Universities Quality Agency (AUQA) (2006). Audit manual v3.0. Retrieved January 24, 2007 from: http://www.auqa.edu.au/qualityaudit/auditmanuals/auditmanual_v3/audit_manual_3.pdf Australian Universities Quality Agency (AUQA) (2007). Audit manual v4.1. August. AUQA, Melbourne. Australian Vice Chancellors Committee (AVCC). (2006). Enhancing the Learning and Teaching Performance Fund. An AVVC Proposal. Baird, L. L. (1988). Value-Added: Using Student Gains as Yardsticks of Learning. In C. Adelman (Ed.), Performance and Judgement: Essays on Principles and Practice in the Assessment of College Student Learning. Washington, D.C.: U.S. Government Printing Office. Baker, S. & Pomerantz, N. (2000-2001). Impact of learning communities on retention at a metropolitan university. Journal of College Student Retention, 2(2), 115-126. Banta, T. W. & Pike, G. R. (2007). Revisiting the blind alley of value added. Assessment Update, 19 (1), 1-15. Barr, R. (1995). From teaching to learning – A new paradigm for undergraduate education. Change, 27(6), 12-26. Barrie, S., Ginns, P., & Symons, R. (2007). Student surveys on teaching and learning: Interim Report, May. Carrick Institute for Learning and Teaching in Higher Education. http://www.carrickinstitute.edu.au Beasley, C. (1997). Students as teachers: The benefits of peer tutoring. In Pospisil, R. and Willcoxson, L. (Eds.), Learning through Teaching, p21-30. Proceedings of the 6th Annual Teaching Learning Forum, Murdoch University, February 1997. Perth: Murdoch University. Retrieved April 13, 2007 from: http://lsn.curtin.edu.au/tlf/tlf1997/beasley.html

Belcheir, M. J. (2001). What predicts perceived gains in learning and in satisfaction. Research Report 2001-02, Boise State University, ID. Office of Institutional Assessment. ERIC NO.: ED480921. Benjamin, R. & Chun, M. (2003). A new field of dreams. The Collegiate Learning Assessment Project. Peer Review, Summer 2003, 26-29. Berger, J. B. (2002). The influence of the organisational structures of colleges and universities on college student learning. Peabody Journal of Education, 77(3), 40-59. Blanc, R., DeBuhr, L. & Martin, D. (1983). Breaking the attrition cycle: The effects of supplemental instruction on undergraduate performance and attrition. Journal of Higher Education, 54(1), 80-90. Blanc, R. & Martin, D. (1994). Supplemental Instruction: Increasing student performance and persistence in difficult academic courses. Academic Medicine, 69(6), 452-454. Bologna Declaration (1999). Joint declaration of the European Ministers of Education. The European Higher Education Area. Retrieved February 12, 2007 from: http://www.bolognaberlin2003.de/pdf/bologna_declaration.pdf Bologna Working Group on Qualifications Frameworks (2005). A framework for qualifications of the European Higher Education Area. Ministry of Science, Technology and Innovation. Retrieved February 12, 2007 from: http://www.bologna-bergen2005.no/Docs/00Main_doc/050218_QF_EHEA.pdf Bonnet, G. (2002). Reflections in a Critical Eye [1]: on the pitfalls of international assessment. Knowledge and skills for life: first results from PISA 2000. Assessment in Education, 9 (3), 387-399. Bormans, M. J., Brouwer, R., In’t Veld, R. J. & Mertens, F. J. (1987). The role of performance indicators in improving the dialogue between government and universities. International Journal of Institutional Management in Higher Education, 11(2), 181-193. Bowden, R. (2000). Fantasy higher education: university and college league tables. Quality in Higher Education, 6 (1), 41-60. Braun, H. I. (2005). Using student progress to evaluate teachers: a primer on valueadded models. Educational Testing Service (ETS). Policy Information Center. Braxton, J. M. (2006). Faculty professional choices in teaching that foster student success. Commissioned Paper for the National Symposium on Postsecondary Student Success. Brennan, J. & Shah, T. (2000). Managing quality in higher education: An international perspective on institutional assessment and change. Buckingham: The Society for Research into Higher Education and Open University Press.

Bruwer, J. (1998). First destination graduate employment as key performance indicator: Outcomes assessment perspectives. Melbourne, Australia: Paper presented at Australian Australasian Association for Institutional Research (AAIR) Annual Forum. Buonaura, C. C. & Nauta, P. D. (2004). An approach to accreditation: The path of the Italian higher education. In P. D. Nauta, P. Omar, A. Schade & J. P. Scheele (Eds.), Accreditation models in higher education: Experiences and perspectives. European Network for Quality Assurance in Higher Education. Retrieved April 4, 2007 from: http://www.enqa.eu/files/ENQAmodels.pdf Burke, J. C. & Minassians, H. (2001). Linking state resources to campus results: From fad to trend. The fifth annual survey (2001). Albany, New York: The Nelson A. Rockefeller Institute of Government. Burke, J. C. & Minassians, H. (2002a). Performance Reporting: The preferred “No Cost” Accountability Program. The sixth annual report (2002). Albany, New York: The Nelson A. Rockefeller Institute of Government. Burke, J. C., Minassians, H. & Yang, P. (2002b). State performance reporting indicators: What do they indicate? Planning for Higher Education, 31 (1), 15-29. Cabrera, A.F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance indicators for assessing classroom teaching practices and student learning: The case of engineering. Research in Higher Education, 42(3), 327-352. CAE (Council for Aid to Education). (n.d.). Collegiate Learning Assessment. CLA in context. Retrieved February 13, 2007 from: http://www.cae.org/content/pdf/CLA.in.Context.pdf Canadian Education Statistics Council. (2006). Education indicators in Canada: Report of the Pan-Canadian Education Indicators Programme 2005. Ontario: Canadian Education Statistics Council. Carini, R. M., Kuh, G. D. & Klein, S. P. (2006). Student engagement and student learning: Testing the linkages. Research in Higher Education, 47(1), 1-32. Chalmers, D., Weber, R., MacDonald, D., Herbert, D, Bahr, N., Terry, D., Lipp, O., McLean.J., Hannam. R. (2003). Teaching large classes, Final Report to the AUTC, March, 2003 Chang, M. J. (2002). Preservation or transformation: Where’s the real educational discourse on diversity? Review of Higher Education, 25(2), 125-140. Chang, M. J. & Astin, A. W. (1997). Who benefits from racial diversity in higher education? Diversity Digest, Winter 1997. Retrieved May 2, 2007 from: http://www.diversityweb.org/Digest/W97/research.html Chickering, A. W. & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3-7.

Clarke, M. (2007). The impact of higher education rankings on student access, choice, and opportunity. In IHEP (Ed.) College and university ranking systems. Global perspectives and American challenges. Clerehan, R., Chanock, K., Moore, T. & Prince, A. (2003). A testing issue: key skills assessment in Australia. Teaching in Higher Education, 8 (2), 279-284. Coates, H. (2005). The value of student engagement for higher education quality assurance. Quality in Higher Education, 11(1), 25-36. Coates, H. (2006b). Universities on the catwalk: Modeling performance in higher education. Paper presented Australasian Association for Institutional Research Annual Forum, Coffs Harbour, NSW. Coates, H. (2007a). Excellent measures precede measures of excellence. Journal of Higher Education Policy Management, 29 (1), 87-94. Codling, A. & Meek, V. L. (2006). Twelve propositions on diversity in higher education. Higher Education Management and Policy, 18 (3), 1-24. College Board, SAT program. http://www.collegeboard.com/prof/ Committee for Quality Assurance in Higher Education (CQAHE) (1995). Report on 1994 quality reviews. Canberra: Australian Government Publishing Service. Conway, C. (2001). The 2000 British Columbia Universities early leavers survey. The University Presidents’ Council of British Columbia. Centre for Education Information. Retrieved June, 1, 2007 from http://www.tupc.bc.ca/publications/ Cooper, T. (2002). Why student retention fails to assure quality. HERDSA. Retrieved May 5, 2007 from: http://www.ecu.edu.au/conferences/herdsa/main/papers/ref/pdf/CooperT2.pdf The Cooperative Institutional Research Program (CIRP). Retrieved September, 15, 2007. http://www.gseis.ucla.edu/heri/cirp.html. Couchman, J. (1997). Supplemental instruction: peer mentoring and student productivity. Paper presented at the Annual Conference of the Australian Association for Research in Education, Brisbane. See abstract at: http://www.aare.edu.au/97pap/coucj521.htm Council for the Renewal of Higher Education (2007). Available from website, http://www.rhu.se/index_eng.htm Creech, J. D. (2000). Linking higher education performance indicators to goals. Atlanta: Southern Regional Education Board. Retrieved April 11, 2007 from: http://www.nmefdn.org/uploads/LinkingHigherEd%20tech%20assistance.pdf

Crosier, D., Purser, L. & Smidt, H. (2007). An EUA report. Trends V: Universities shaping the European Higher Education Area. European Universities Association (EUA). Retrieved May 25, 2007 from: http://www.eua.be/fileadmin/user_upload/files/Publications/Final_Trends_Report__May_10 .pdf Del Favero, M. (2002). Linking administrative behavior and student learning: The learning centered academic unit. Peabody Journal of Education, 77(3), 60-84. Department of Education, Science and Training (DEST). (2005c). Learning and Teaching Performance Fund. Future Directions. Discussion paper. Retrieved May 24, 2007 from: http://www.dest.gov.au/NR/rdonlyres/645A5662-D277-49DC-A59A937022D5A0F1/10279/LTPF_DiscussionPaper_2006.pdf Department of Education, Science and Training (DEST). (2006). Learning and Teaching Performance Fund Advisory Group. Advisory Group Report. Available at: http://www.dest.gov.au/NR/rdonlyres/3059183E-F359-412E-A3D7FC6BCEC1F843/12913/AdvisorygroupReporttoMinister.pdf Department of Education, Science and Training (DEST). (2007). Institution assessment framework information collection. Instructions. Canberra: Higher Education Group. Dickeson, R. (2006). The need for accreditation reform. A national dialogue: The Secretary of Education’s Commission on the Future of Higher Education. Retrieved March 8, 2007 from: http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/dickeson.pdf Dill, D. D. & Soo, M. (2005). Academic quality, league tables, and public policy: A crossnational analysis of university ranking systems. Higher Education, 49, 495-533. Donald, J. G. (2000). Indicators of success: from concepts to classrooms. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA, April 24-28, 2000. ERIC No.: ED441858. Doyle, W. R. (2006). State accountability policies and Boyer’s Domains of Scholarship: Conflict or collaboration? New Directions for Institutional Research, Spring 129, 97-113. Dwyer, C. A., Millett, C. M. & Payne, D. G. (2006). A Culture of Evidence: Postsecondary Assessment and Learning Outcomes. Princeton, N.J.: Educational Testing Service. Eaton, J. S. (2006). An overview of U.S. accreditation. Council for Higher Education Accreditation (CHEA). Retrieved January 30, 2007 from: http://www.chea.org/pdf/overviewAccred_rev0706.pdf Etter, E., Burmeister, S. & Elder, R. (2001). Improving student performance and retention via supplemental instruction. Journal of Accounting Education, 18(4), 355-368. European Association for Quality Assurance in Higher Education (2005). Standards and guidelines for quality assurance in the European Higher Education Area. Helsinki, Finland.

Retrieved March 5, 2007 from: http://www.bologna-bergen2005.no/Docs/00Main_doc/050221_ENQA_report.pdf European Centre for Higher Education (2003). Trends and developments in higher education in Europe. United Nations Educational, Scientific and Cultural Organization. Feldman, K. A. (1976). The superior college teacher from the students’ view. Research in Higher Education, 5, 243-288. Feldman, K. A. (1989). Instructional effectiveness of college teachers as judged by teachers themselves, current and former students, colleagues, administrators, and external (neutral) observers. Research in Higher Education, 30, 113-135. Filinov, N. B. & Ruchkina, S. (2002). The ranking of higher education institutions in Russia: some methodological problems. Higher Education in Europe, 27 (4), 407-421. Finocchietti, C. & Capucci, S. (2003). Accreditation in the Italian university system. In S. Schwarz & D. F. Westerheijden (Eds.), Accreditation in the framework of evaluation activities: A comparative study in the European Higher Education Area. European Network for Quality Assurance in Higher Education. Retrieved April 4, 2007 from: http://www.cimea.it/servlets/resources?contentId=2817&resourceName=Inserisci%20alleg ato Gaither, G., Nedwek, B. P. & Neal, J. E. (1994). Measuring up. ASHE-ERIC Higher Education Report No.5; Washington, D.C. Gansemer-Topf, A., Saunders, K., Schuh, J. & Shelley, M. (2004). A study of resource expenditures and allocation at DEEP colleges and universities: Is spending related to student engagement? Educational Leadership and Policy Studies, Iowa State University. Retrieved February 22, 2007 from: http://nsse.iub.edu/pdf/DEEP_Expenditures_Schuh.pdf Garlick, S. & Pryor, G. (2004). Benchmarking the university: Learning about improvement. Canberra: Department of Education, Science and Training. Retrieved March 23, 2007 from: http://www.dest.gov.au/NR/rdonlyres/7628F14E-38D8-45AA-BDC62EBA32D40431/2441/benchmarking.pdf Gibbs, G., & Coffey, M. (2004). The Impact of training of university teachers on their teaching skills, their approach to teaching and the approach to learning of their students. Active Learning in Higher Education, 5(1), 87-100. Goldstein, H. (2004). International comparisons of student attainment: some issues arising from the PISA study. Assessment in Education, 11 (3), 319-330. Gonzalez, J. & Wagenaar, R. (2005). Tuning educational structures in Europe 2: Universities’ contribution to the Bologna process. Spain: Publicaciones de la Universidad de Deusto Apartado.

Gordon, T. W., Young, J. C., & Kalianov, C. J. (2001). Connecting the freshman year experience through learning communities: Practical implications for academic and student affairs units. College Student Affairs Journal, 20(2), 37-47 Graduate Careers Australia (2006b). Enhancing the GCA national surveys: An examination of critical factors leading to enhancements in the instrument, methodology, and process. DEST: Commonwealth of Australia. Retrieved March 29, 2007 from http://www.dest.gov.au/sectors/higher_education/publications_resources/profiles/enh ancing_gca_national_surveys.htm Greenwald, R., Hedges, L. V. & Laine, R. D. (1996). The effect of school resources on student achievement. Review of Education Research, 66, 361-396. Griffin P., Coates H., McInnis, C., & James R. (2003). The development of an extended Course Experience Questionnaire. Quality in Higher Education, 9, 259-266. Guarino, C., Ridgeway, G., Chun, M. & Buddin, R. (2005). Latent variable analysis: a new approach to university ranking. Higher Education in Europe, 30 (2), 147-165. Gurin, P. Y. (1999). Expert report of Patricia Gurin, Gratz et al. v. Bollinger et al., No. 9775321, Grutier et al. v. Bollinger et al. Retrieved March 30, 2007 from: http://www.vpcomm.umich.edu/admissions/legal/expert/summ.html Gurin, P. & Nagda, B. (2006). Getting to the what, how and why of diversity on campus. Educational Researcher, 35(1), 20-24. Guthrie, J. & Neumann, R. (2006). Performance Indicators in Universities: The Case of the Australian University System. (Submission for Public Management Review Final February 2006). Guthrie, J. W., Springer, M. G., Rolle, A. R., & Houck, E. A. (2006). Modern education finance and policy. New Jersey: Allyn & Bacon. Hamrick, F. A., Schuh, J. H. & Shelley, M. C. (2004). Predicting higher education graduation rates from institutional characteristics and resource allocation. Education Policy Analysis Archives, 12(19). Hanushek, E. A. (1989). The impact of differential expenditure on school performance. Educational Researcher, 18, 45-51. Hanushek, E. A. (1997). Assessing the effects of school resources on student performance: An update. Educational Evaluation and Policy Analysis, 19, 141-164. Harman, G. & Meek, V. (2000). Repositioning quality assurance and accreditation in Australian higher education. Canberra: Department of Education, Science and Technology. Retrieved March 22, 2005 from: http://www.dest.gov.au/highered/pubgen/pubsalph.htm#Repositioning

Harvey, L., Green, H. & Burrows, A. (1993). Assessing quality in higher education: A transbinary research project. Assessment and Evaluation in Higher Education, 18(2), 143-148. Harvey, L. & Newton, J. (2004). Transforming quality evaluation. Quality in Higher Education, 10 (4), 149-165. Hattendorf, L. C. (1996). Educational rankings of higher education: fact or fiction? Paper presented at the International Conference on Assessing Quality in Higher Education (8th, Queensland, Australia, July 15, 1996). ERIC No. ED401785. Hattie, J. (2005). What is the nature of evidence that makes a difference to learning? Paper presented at The Australian Council for Educational Research Annual Conference on Using Data to Support Learning, Melbourne, Australia. Retrieved March 29, 2007 from: www.acer.edu.au/workshops/conferences.html#past Haug, G. & Tauch, C. (2001). Trends in learning structures in higher education 2. Follow-up report prepared for the Salamanca and Prague conferences of March/May 2001. Finnish National Board of Education, Finland. Hearn, J. C. (2006). Student success: What research suggests for policy and practice. Executive summary. National Symposium on Postsecondary Student Success. National Postsecondary Education Cooperative. Retrieved February 15, 2007 from: http://nces.ed.gov/npec/pdf/synth_Hearn.pdf Hersh, R. H. (2006). A test of leadership. Higher education’s need to reclaim learning and accountability. Draft paper presented to MHEC/SHEEO Summit Indianapolis, Indiana, November 14, 2006. Retrieved February 13, 2007 from: http://www.cae.org/content/pdf/Hersh.ATestofLeadership.pdf Higher Education Academy (2006). The UK Professional Standards Framework for teaching and supporting learning in higher education. Retrieved May 16, 2007 from http://www.heacademy.ac.uk/professionalstandards.htm Higher Education Funding Council for England (HEFCE). December, 1999/66. Performance indicators in higher education in the UK. Retrieved August 20, 2007 from http://www.hefce.ac.uk/pubs/hefce/1999/99_66/main.htm Higher Education Funding Council for England (HEFCE) June 2007/14. Review of performance indicators: Outcomes and decisions. Retrieved August 20, 2007 from http://www.hefce.ac.uk/pubs/hefce/2007/07_14/ Ho, A., Watkins, D. & Kelly, M. (2001). The conceptual change approach to improving teaching and learning: an evaluation of a Hong Kong staff development programme. Higher Education, 42(2), 143-169.

Hodges, R., Dochen, C., & Joy, D. (2001). Increasing students’ success: When supplemental instruction becomes mandatory. Journal of College Reading and Learning, 31(2), 143-156. Horsburgh, M. (1999). Quality monitoring in higher education: the impact on student learning. Quality in Higher Education, 5 (1), 9-25. Hounsell, D., & Entwistle, N. (2005). Enhancing teaching-learning environments in undergraduate courses. Final Report to the Economic and Social Research Council on TLRP Project L139251099. Retrieved March 3, 2007 from: http://www.tla.ed.ac.uk/etl/docs/ETLfinalreport.pdf House, J. D. & Kuchynka, S. J. (1997). The effects of a freshman orientation course on the achievement of health science students. Journal of College Student Development, 38(5), 540-542. Hoyt, D. P. & Lee, E. (2002). Teaching Styles and Learning Outcomes. Manhattan: IDEA Centre. Hu, S. & Kuh, G. D. (2003b). Diversity experiences and college student learning and personal development. Journal of College Student Development, 44(3), 320-334. Hursh, D. (2005). The growth of high-stakes testing in the USA: accountability, markets and the decline in educational quality. British Educational Research Journal, 31 (5), 605-622. Hurtado, S. (1996). How diversity affects teaching and learning climate of inclusion has a positive effect on learning outcomes. Diversity Digest, Fall 1996. Retrieved May 2, 2007 from: http://www.diversityweb.org/research_and_trends/research_evaluation_impact/benefits_of _diversity/sylvia_hurtado.cfm Hurtado, S. Milem, J. E., Clayton-Pedersen, A. R. & Allen, W. (1998). Enhancing campus climates for racial/ethnic diversity: Educational policy and practice. Review of Higher Education, 21(3), 279-302. Hurtado, S. Milem, J. E., Clayton-Pedersen, A. R. & Allen, W. (1999). Enacting diverse learning environments: Improving the climate for racial/ethnic diversity in higher education. ASHE-ERIC Higher Education Report, 26 (8). Washington, DC: George Washington University. Hurtado, S., Dey, E. L., Gurin, P. Y. & Gurin, G. (2003). College environments, diversity, and student learning. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 18, pp.145-189). Dordrecht, Netherlands: Kluwer. Iezzi, D. F. (2005). A method to measure the quality on teaching evaluation of the university system: The Italian case. Social Indicators Research, 73, 459-477. Ikenberry, S. O. (1997). Defining a new agenda: Higher education and the future of America. NCA Quarterly, 71 (4), 445-450.

Institute for Higher Edcuation Policy (IHEP). (Ed.) (2007). College and university ranking systems. Global perspectives and American challenges. Inoue, Y. (2005). Critical thinking and diversity experiences: A connection. Paper presented at the American Educational Research Association 2005 AERA Annual Meeting, April 11-15, Montréal, Québec, Canada. James, R. & Baldwin, G. (2002). Nine principles guiding teaching and learning in the University of Melbourne: A framework for a first-class teaching and learning environment. Melbourne: Centre for the Study of Higher Education. Jenkins, A., Healey, M., & Zetter, R. (2007). Linking teaching and research in disciplines and departments. The Higher Education Academy. http://www.heacademy.ac.uk/rtnexus.htm Johnson, J. L. (2000-2001). Learning communities and special efforts in the retention of university students: What works, what doesn’t and is the return worth the investment? Journal of College Student Retention, 2(3), 219-238. Joint Quality Initiative (JQI) (2004). Shared ‘Dublin’ descriptors for short cycle, first cycle, second cycle and third cycle awards: A report from a Joint Quality Initiative informal group. Retrieved March 5, 2007 from: http://www.unidue.de/imperia/md/content/bologna/dublin_descriptors.pdf Kember, D., Lueng, D. Y. P., & Kwan, K. P. (2002). Does the Use of Student Feedback Questionnaires Improve the Overall Quality of Teaching? Assessment & Evaluation in Higher Education, 27(5), 411–425. Kilpatrick, S., Barret, M. & Jones, T. (2003). Defining learning communities. Retrieved May 31, 2007 from: http://www.aare.edu.au/03pap/jon03441.pdf Kirsch, I., de Jong, J., Lafontaine, D., McQueen, J., Mendelovits, J. & Monseur, C. (2002). Reading for change. Performance and engagement across countries. Results from PISA 2000. Retrieved May 8, 2007 from: http://www.oecd.org/dataoecd/43/34/33690986.pdf Klein, S., Shavelson, R., Benjamin, R. & Bolus, R. (2007). The Collegiate Learning Assessment: facts and fantasies. Retrieved June 1, 2007 from: http://www.cae.org/content/pdf/CLA.Facts.n.Fantasies.pdf Krause, K-L., Hartley, R., James, R., & McInnis, C. (2005). The First Year Experience in Australian Universities: Findings from a Decade of National Studies. CSHE: University of Melbourne. Retrieved March 29, 2007 from: http://www.cshe.unimelb.edu.au/pdfs/FYEReport05KLK.pdf Kuh, G. D. (1993b). Ethos: its influence on student learning. Liberal Education 79(4), 22-31. Kuh, G. D. (1995). The other curriculum: Out-of-class experiences associated with student learning and personal development. Journal of Higher Education, 66(2), 123-155.

Kuh, G. D. (2001). The National Survey of Student Engagement: Conceptual framework and overview of psychometric properties. Bloomington, IN: Indiana University Center for Postsecondary Research. Retrieved March 30, 2007 from: http://nsse.iub.edu/html/psychometric_framework_2002.cfm Kuh, G. D. (2003a). What we’re learning about student engagement from NSSE. Change, 35(2), 24-31. Kuh, G. D. (2006). Director’s message in: Engaged learning: Fostering success for all students. Bloomington, Indiana: National Survey of Student Engagement. Annual Report 2006. Kuh, G. D., Pace, C. R., & Vesper, N. (1997). The development of process indicators to estimate student gains associated with good practices in undergraduate education. Research in Higher Education, 38(4), 435-454. Kuh, G. D., Kienzie, J., Schuh, J. H., & Whitt, E. J. (2005). Never let it rest. Lessons about student success from high-performing colleges and universities. Change, 37(4), 44-51. Kulic, J. & Kulic, C. L. (1979). College teaching, In P. Peterson & H. Walberg (eds.), Research on Teaching: Concepts, Findings and Implications, pp. 70-93. Berkelely CA: McCutcheon. Land, M. L. (1979). Low-inference variables of teacher clarity: Effects on student concept learning. Journal of Educational Psychology, 71, 795-799. Landgraf, K. (2005). Cover letter accompanying the distribution of Braun (2005) report. Lietz, P. (1996). Learning and writing difficulties at the tertiary level: their impact on first year results. Studies in Educational Evaluation, 22(1), 41-57. Linke, R. D. (1991). Performance indicators in higher education: Report of a trial evaluation study, 1. Canberra: Department of Employment, Education and Training. Liu, N. C. & Cheng, Y. (2005). The academic ranking of world universities – methodologies and problems. Higher Education in Europe, 30 (2), 127-136. Lizzio, A., Wilson, K. & Simons, R. (2002). University students’ perceptions of the learning environment and academic outcomes: Implications for theory and practice. Studies in Higher Education, 27(1), 27-52. Macdonald, I. (2001). The teaching community: recreating university teaching. Teaching in Higher Education, 6(2), 154-167. Mangold, W., Bean, L., Adams, D., Schwab, W., & Lynch, S. (2002-2003). Who goes who stays: An assessment of the effect of a freshman mentoring and unit registration program on college persistence. Journal of College Student Retention, 4(2), 95-122.

Maricopa Community College District (MCCCD). (2002). Integrated learning communities. Retrieved May 31, 2007 from: http://hakatai.mcli.dist.maricopa.edu/ilc/monograph/index.html Martinez, P., & Munday, F. (1998). 9,000 voices: Student persistence and drop-out in further education. (Feda Report 2(7)). London: Further Education Development Agency. Massy, W. F. (1996). Teaching and Learning Quality Process Review: The Hong Kong programme. A paper presented at the International Conference on Quality Assurance and Evaluation in Higher Education, Beijing, China, May 6, 1996. Massy, W. F. & French, N. J. (2001). Teaching and Learning Quality Process Review: What the programme has achieved in Hong Kong. Quality in Higher Education, 7 (1), 33-45. McDaniel, E. A., Dell Felder, B., Gordon, L., Hrutka, M. E. & Quinn, S. (2000). New faculty roles in learning outcomes education: The experiences of four models and institutions. Innovative Higher Education, 25(2), 143-157. McInnis, C., James, R. & Hartley, R. (2000). Trends in the first year experience in Australian Universities, 2000. Melbourne: Department of Employment, Education, Training and Youth Affairs. Retrieved April 13, 2007 from: http://www.dest.gov.au/archive/highered/eippubs/eip00_6/fye.pdf McInnis, C., Powles, M. & Anwyl, J. (1994). Australian academics’ perspectives on quality and accountability. Melbourne: University of Melbourne Centre for the Study of Higher Education. McKinnon, K., Walker, S. & Davis, D. (1999). Benchmarking: A manual for Australian universities. Canberra: Department of Employment, Education, Training and Youth Affairs. Meredith, M. (2004). Why do universities compete in the ratings game? An empirical analysis of the effects of the U.S. News and World Report college rankings. Research in Higher Education, 45 (5), 443-461. Messick, S. (1989). Meaning and values in test validation: the science and ethics of assessment. Educational Researcher, 18 (2), 5-11. Miller, M. A. & Ewell, P. T. (2005). Measuring up on college-level learning. The National Center for Public Policy and Higher Education. Retrieved March 7, 2007 from: http://www.highereducation.org/reports/mu_learning/Learning.pdf Ministerial Council on Education, Employment, Training and Youth Affairs (MCEETYA). (2006). National protocols for higher education approval processes. Minkler, J. E. (2002). ERIC Review: Learning communities at the community college. Community College Review, 30(3), 46-62.

Moodie, G. (2005). University rankings. Griffith University. Retrieved May 9, 2007 from: http://www.griffith.edu.au/vc/staff/moodie/pdf/05atem3.pdf Munoz, M.A., & Eggington, E. (1999). Comparison of indicators of educational quality among institutions of higher education in El Salvador. ERIC No. ED462886. National Board of Employment, Education and Training (NBEET). (1995). The promotion of quality and innovation in higher education: Advice of the Higher Education Council on the use of discretionary funds. Canberra: Australian Government Publishing Service. Retrieved March 23, 2007 from: http://www.dest.gov.au/NR/rdonlyres/03499333-CF8E-41C0-A58AE781FD28ED77/3916/95_25.pdf National Center for Education Statistics (2007). Surveys and programs: Postsecondary. Available from National Center for Education Statistics website, http://nces.ed.gov/surveys/SurveyGroups.asp?Group=2 National Center for Public Policy and Higher Education (2006). Measuring Up 2006: The national report card on higher education. Retrieved February 13, 2007 from: http://measuringup.highereducation.org/_docs/2006/NationalReport_2006.pdf National Survey of Student Engagement. (2000). The college student report. The NSSE 2000 report: National benchmarks of effective educational practice. Retrieved February 21, 2007 from: http://nsse.iub.edu.pdf/NSSE%202000%20National%20Report.pdf National Survey of Student Engagement. (2006). Annual report 2006. Engaged learning: Fostering success for all students. Retrieved May 1, 2006 from: http://nsse.iub.edu/NSSE_2006_Annual_Report/docs/NSSE_2006_Annual_Report.pdf National Survey of Student Engagement (2007a). About NSSE. Retrieved March 30, 2007 from http://nsse.iub.edu/html/quick_facts.cfm National Survey of Student Engagement (2007b). What is BCSSE?. Retrieved March 30, 2007 from http://bcsse.iub.edu/about.cfm National Survey of Student Engagement (2007c). Publications and Presentations. Retrieved March 30, 2007 from http://nsse.iub.edu/html/pubs.cfm?viewwhat=Research%20Paper National Center for Educational Statistics (NCES). (2006). Teaching science in five countries: results from the TIMSS 1999 video study. Statistical analysis report. Retrieved May 10, 2007 from: http://nces.ed.gov/pubs2006/2006011.pdf NBPTS. (2000). NBPTS: Career and Technical Education Standards. NBPTS Nelson Laird, T. F. (2005). College students’ experiences with diversity and their effects on academic self-confidence, social agency and disposition toward critical thinking. Research in Higher Education, 46(4), 365-387.

New Zealand Cabinet Office (2006). November cabinet paper and minutes. Retrieved April 27, 2007 from: http://www.tec.govt.nz/upload/downloads/cabinet-paper-2-April.pdf New Zealand Vice-Chancellors’ Committee (2007). Quality assurance. Available from New Zealand Vice-Chancellors’ Committee website, http://www.nzvcc.ac.nz/default.aspx?l=1&p=5 Newton, J. (2000). Feeding the beast or improving quality?: Academics’ perceptions of quality assurance and quality monitoring. Quality in Higher Education, 6 (2), 153-163. Newton, J. (2002). Views from below: Academics coping with quality. Quality in Higher Education, 8 (1), 39-61. Northedge, A. (2003). Rethinking teaching in the context of diversity. Teaching in Higher Education, 8, 17-32. Orrell, J. (1996). Assessment of student learning: A problematised approach. Different Approaches: Theory and Practice in Higher Education. Proceedings HERDSA Conference 1996. Perth, Western Australia, 8-12 July. http://www.herdsa.org.au/confs/1996/orrell.html Otten, M. (2003). Intercultural learning and diversity in higher education. Journal of Studies in International Education, 7(1), 12-26. Pace, C. R. (1979). Measuring outcomes of college: fifty years of findings and recommendations for the future. San Francisco: Jossey Bass. Pace, C. R. (1995). From good practices to good products: relating good practices from undergraduate education to student achievement. Paper presented at the Association for Institutional Research, Boston. Pascarella, E. T. (2001). Identifying excellence in undergraduate education. Are we even close? Change, 33 (3), 18-23. Pascarella, E. T. & Terenzini, P. T. (1991). How college affects students. San Francisco: Jossey-Bass. Pascarella, E. T. & Terenzini, P. T. (2005). How college affects students (Vol 2): a third decade of research. San Francisco: Jossey-Bass. Pascarella, E. T., Palmer, B., Moye, M. & Pierson, C. T. (2001). Do diversity experiences influence the development of critical thinking? Journal of College Student Development, 42(3), 257-271. Peat, M., Dalziel, J. & Grant, A. M. (2001). Enhancing the first year student experience by facilitating the development of peer networks through a one-day workshop. Higher Education Research and Development, 20(2), 199-215.

Peterson, M. W. & Augustine, C. H. (2000). Organisational practices enhancing the influence of student assessment information in academic decisions. Research in Higher Education, 41 (1), 21-52. Phillips KPA. Victorian Qualifications Authority. (2006). Investigation of outcomes-based auditing. Final report. Pike, G. R., Kuh, G. D. & Gonyea, R. M. (2003). The relationship between institutional mission and students’ involvement and educational outcomes. Research in Higher Education, 44(2), pp.241-261. Pike, G. R. & Kuh, G. D. (2006). Relationships among structural diversity, informal peer interactions and perceptions of the campus environment. Review of Higher Education, 29(4), 425-452. PISA (2003). What PISA produces. Retrieved May 7, 2007 from: ??? Prebble, T., Hargraves, H., Leach, L., Naidoo, K., Suddaby, G. & Zepke, N. (2004). Impact of student support services and academic development programmes on student outcomes in undergraduate tertiary study: a synthesis of the research. Report to the Ministry of Education. Wellington: Ministry of Education. Retrieved March 28, 2007 from: http://www.educationcounts.edcentre.govt.nz/publications/downloads/ugradstudentoutcom es.pdf Quality Assurance Agency (2007). Quality assurance in U.K. higher education: A guide for international readers. Retrieved January 31, 2007 from: http://www.qaa.ac.uk/international/studentGuide/English_readers.asp Rainey, M. & Kolb, D. (1995). Using experiential learning theory and learning styles in diversity education. In R. Sims & S. Sims (Eds.), The importance of learning styles: Understanding the implications for learning, course design, and education. Connecticut: Greenwood Press. Ramsden, P. (1991). A performance indicator on teaching quality in higher education: The Course Experience Questionnaire. Studies in Higher Education, 16, 129-150. Ramsden, P. & Martin, E. (1996). Recognition of good university teaching: Policies from an Australian study. Studies in Higher Education, 21(3), 299-316. Rau, W., & Durand, A. (2000). The academic ethic and college grades: Does hard work help students to “make the grade”? Sociology of Education, 73(1), 19-38. Reindl, T. & Brower, D. (2001). Financing state colleges and universities: What is happening to the “public” in public higher education? Perspectives. Washington D.C.: American Association of State Colleges and Universities. Robles, H. J. (1999). The learning college – an oxymoron? Paper presented at Community College League of California. Burlingame, CA, November 20, 1999. ERIC No.: ED437100.

Rojo, L., Seco, R., Martínez, M. & Malo, S. (2001). Internationalisation Quality Review presentation: Institutional Experiences of Quality Assessment in Higher Education Universidad Nacional Autonoma de Mexico (Mexico). Organisation for Economic Cooperation and Development (OECD). Retrieved April 11, 2007 from: http://www.oecd.org/dataoecd/49/4/1870961.pdf Romainville, M. (1999). Quality Evaluation of Teaching in Higher Education. Higher Education in Europe, 24(3), 414-424. ERIC No.: EJ603621. Rosenshine, B. & Furst, N. (1973).The use of direct observation to study teaching. In R.M.W. Travers (ed.), Second handbook of research on teaching. Chicago: Rand McNally. Rowe, K. & Lievesley, D. (2002). Constructing and using educational performance indicators. Background paper for Day 1 of the inaugural Asia-Pacific Educational Research Association (APERA) regional conference, ACER, Melbourne April 16-19, 2002. Available at: http://www.acer.edu.au/research/programs/documents/Rowe&LievesleyAPERAApril2 002.pdf Sanoff, A. P. (2007). The U.S News college rankings: a view from the inside. In IHEP (Ed.). (2007). College and university ranking systems. Global perspectives and American challenges. Schacter, J. & Thum, Y. M. (2004). Paying for high- and low-quality teaching. Economics of Education Review, 23, 411-430. Schade, A. (2003). Recent quality assurance activities in Germany. European Journal of Education, 38 (3), 285-290. Schilling, K. M. & Schilling, K. L. (1999). Increasing expectations for student effort. About Campus, 4(2), 4-10. Schray, V. (2006). Assuring quality in higher education: Recommendations for improving accreditation. A national dialogue: The Secretary of Education’s Commission on the Future of Higher Education. Retrieved March 8, 2007 from: http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/schray2.pdf Shanahan, M., Findlay, C., Cowie, J., Round, D., McIver, R. & Barrett, S. (1997). Beyond the ‘input-output’ approach to assessing determinants of student performance in university economics: implications from student learning centred research. Australian Economic Papers, 36 (Sep. 1997, supplement), 17-37. Sharpe, A. (2007). Comparative review of British, American and Australian national surveys of undergraduate students. York: Higher Education Academy. Retrieved March 30, 2007 from

http://www.heacademy.ac.uk/documents/National_Survey_Comparative_Review_Feb _2007.doc Shulman, L.S. (2002). Making differences: a table of learning. Change, 34(6), 36-44. Smart, J. C., Feldman, K. A. & Ethington, C. A. (2000). Academic disciplines: Holland’s theory and the study of college students and faculty. Nashville, TN: Vanderbuilt University Press. Smith, D. G., Gerbig, G. L., Figueroa, M. A., Watkins, G. H., Levitan, T., Moore, L. C., Merchant, P. A., Beliak, H. D. & Figueroa, B. (1997). Diversity works: The emerging picture of how students benefit. Washington, DC: American Association of Colleges and Universities. Spellings, M. (2006). A test of leadership: Charting the future of U.S. higher education. Jessup, MD: Education Publications Centre, U.S. Department of Education Stella, A. & Woodhouse, D. (2006). Ranking of Higher Education Institutions. Melbourne, VIC: AUQA Occasional Publications. Suyemoto, K. L. & Nien-chu Kiang, P. (2003). Diversity research as service learning. Academic Exchange Quarterly, 7(2), 71-75. Swedish National Agency for Higher Education (2007). Quality assurance. Available from Swedish National Agency for Higher Education website, http://www.hsv.se Swiss Confederation. (2006) Federal Department for Home Affairs (FDHA). State Secretariat for Education and Research (SER). Analysis and forecast. International ranking of universities. Retrieved May 9, 2007 from: http://www.sbf.admin.ch/htm/services/publikationen/schriften/Grundlagen/factsheets/F S18_Ranking_e_300107.pdf Tam, M. (2007). Assessing quality experience and learning outcomes. Part II: findings and discussions. Quality Assurance in Education, 15(1), 61-76. Tavenas, F. (2003). Quality Assurance: A Reference System for Indicators and Evaluation Procedures. Belgium: EUA. Tennessee Higher Education Commission. (2007). Performance funding. Available from Tennessee Higher Education Commission website, http://www.state.tn.us/thec/2004web/division_pages/ppr_pages/Policy/pprpolicyperformanc efunding.htm Terenzini, P. T., Cabrera, A. F., Colbeck, C. L., Bjorklund, S. A. & Parente, J. M. (2001). Racial and ethnic diversity in the classroom: Does it promote student learning? Journal of Higher Education, 72(5), 509-531. Thomas, L. (2002). Student retention in Higher Education: the role of institutional habitus. Journal of Educational Policy, 17(4), 423-442.

The University Presidents’ Council of British Columbia, Student Outcomes. http://www.tupc.bc.ca/student_outcomes/publications/ Tight, M. (2000). Do league tables contribute to the development of a quality culture? Football and higher education compared. Higher Education Quarterly, 54 (1), 22-42. Tinto, V. (1997). Classrooms as communities: exploring the educational character of student persistence. Journal of Higher Education, 68(6), 599-644. Tinto, V. (1998). Colleges as communities: Taking research on student persistence seriously. Review of Higher Education, 21(2), 167-177. Tinto, V. & Russo, P. (1993). A longitudinal study of the Coordinated Studies Program at Seattle Central Community College. A study by the National Center for Postsecondary Teaching, Learning, and Assessment, Syracuse University. Tinto, V. & Pusser, B. (2006). Moving from theory to action: Building a model of institutional action for student success. Commissioned paper presented at the 2006 Symposium of the National Postsecondary Education Cooperative (NPEC). Retrieved February 23, 2007 from: http://nces.ed.gov/npec/pdf/Tinto_Pusser_Report.pdf Treisman, U. (1993). The professional development program at the University of CaliforniaBerkeley. FIPSE Project Paper. Retrieved March 29, 2007 from: http://www.ed.gov/about/offices/list/ope/fipse/lessons2/cal-berk.html Trowler, P., Fanghanel, J. & Wareham, T. (2005). Freeing the chi of change: the Higher Education Academy and enhancing teaching and learning in higher education. Studies in Higher Education, 30 (4), 427-444. Umbach, P. D. & Wawrzynski, M. R. (2005). Faculty do matter: The role of college faculty in student learning and engagement. Research in Higher Education, 46(2), 153-184. Umbach, P. D. & Kuh, G. D. (2006). Student experiences with diversity at liberal arts colleges: another claim for distinctiveness. Journal of Higher Education, 77(1), 169-192. United Kingdom Department for Education and Skills (DfES). (2003). White paper: The future of higher education. Retrieved March 28, 2007 from: http://www.dfes.gov.uk/hegateway/strategy/hestrategy/pdfs/DfES-HigherEducation.pdf United States Department of Education (USDE). (2006a). A test of leadership: Charting the future of U.S. higher education (Spelling report). Retrieved April 24, 2007 from: http://www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf United States Department of Education (USDE). (2006b). Fund for the Improvement of Postsecondary Education (FIPSE): The Comprehensive Program fiscal year. Retrieved May 25, 2007 from: http://apply.grants.gov/opportunities/instructions/oppED-GRANTS051407-001-cfda84.116-cid84-116B2007-2-instructions.pdf

University Grants Committee (UGC) (2007a). A note on the funding mechanisms of UGC: Formula, criteria and principles for allocating funds within UGC-funded institutions. Retrieved April 18, 2007 from: http://www.legco.gov.hk/yr0607/english/panels/ed/papers/ed0228cb2-1182-2-e.pdf University Grants Committee (UGC) (2007b). Quality Assurance Council (QAC). Available from University Grants Committee website, http://www.ugc.edu.hk/eng/qac/index.htm University of Wisconsin, Stephens Point (2000). School of Edcuation. Definition and description of the learning community concept. Retrieved May 31, 2007 from: http://www.uwsp.edu/education/lkirby/LC/1%20Definition.htm Usher, A. & Savino, M. (2007). A global survey of rankings and league tables. In IHEP (Ed.) College and university ranking systems. Global perspectives and American challenges. Vanderpoorten, M. (2003). Working on the European dimension of quality: Opening of the conference. In D. F. Westerheijden & M. Leegwater (Eds.), Working on the European dimension of quality: Report of the conference on quality assurance in higher education as part of the Bologna process. Amsterdam, 12-13 March, 2002. Retrieved April 11, 2007 from: http://www.utwente.nl/cheps/documenten/engbook03workingeuropeandimension.pdf Vaughn, J. (2002). Accreditation, commercial rankings, and new approaches to assessing the quality of university research and education programmes in the United States. Higher Education in Europe, 27 (4), 433-441. Vermunt, J. D. & Vermetten, Y. J. (2004). Patterns in student learning: relationships between learning strategies, conceptions of learning, and learning orientations. Educational Psychology Review, 16(4), 359-384. Vidovich, L. & Currie, J. (2006). Ongoing tensions in quality policy processes: a meta level view. Proceedings of the Australian Universities Quality Forum 2006. AUQA Occasional Publication. Ward, D. (2007). Academic values, institutional management and public policies. Higher Education Management and Policy, 19 (2), 1-12. Washington Center for Improving the Quality of Undergraduate Education. (n.d.). Learning communities. National resource center. Retrieved May 31, 2007 from: http://www.evergreen.edu/washcenter/lcfaq.htm Westerheijden, D. F. (2003). Movements towards a European dimension in quality assurance and accreditation. In D. F. Westerheijden & M. Leegwater (Eds.), Working on the European dimension of quality: Report of the conference on quality assurance in higher education as part of the Bologna process. Amsterdam, 12-13 March, 2002. Retrieved April 11, 2007 from: http://www.utwente.nl/cheps/documenten/engbook03workingeuropeandimension.pdf

Williams, R. & Van Dyke, N. (2004). The international standing of Australian universities. Melbourne Institute Report No. 4. Melbourne Institute of Applied Economic and Social Research.
Williford, M., Chapman, L. C. & Kahrig, T. (2000-2001). The university experience course: A longitudinal study of student performance, retention, and graduation. Journal of College Student Retention, 2(4), 327-340.
Wilson, K., Lizzio, A. & Ramsden, P. (1997). The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education, 22(1), 33-53.
Yorke, M. (1991). Performance indicators: towards a synoptic framework. Higher Education, 21(2), 235-248.
Yorke, M. (1997). Can performance indicators be trusted? Revised version of a paper presented at the Association for Institutional Research Forum. ERIC No.: ED 418 660.
Yorke, M. (2000). Smoothing the transition into higher education: What can be learned from student non-completion. Journal of Institutional Research, 9, 78-88.
Yorke, M. & Longden, B. (2007). The first year experience in higher education in the UK: Report on phase 1 of a project funded by the Higher Education Academy. York: Higher Education Academy. Retrieved March 30, 2007 from: http://www.heacademy.ac.uk/research/FirstYearExperience.pdf
Young, S. & Shaw, D. G. (1999). Profiles of effective college and university teachers. The Journal of Higher Education, 70(6), 670-686.
Zeegers, P. (1994). First year university science – revisited. Research in Science Education, 24(1), 382-383.
Zeegers, P. & Martin, L. (2001). A learning-to-learn program in a first-year chemistry class. Higher Education Research & Development, 20(1), 35-52.
Zhao, C.-M. & Kuh, G. D. (2004). Adding value: learning communities and student engagement. Research in Higher Education, 45(2), 115-138.



LIST OF ACRONYMS

AASCU - American Association of State Colleges and Universities
AAUT - Awards for Australian University Teaching
ACER - Australian Council for Educational Research
ACT - American College Testing Assessment
AEI - Australia Education International
AGS - Australian Graduate Survey
AQF - Australian Qualifications Framework
ATN - Australian Technology Network
AUSSE - Australasian Survey of Student Engagement
AUQA - Australian Universities Quality Agency
AVCC - Australian Vice-Chancellors' Committee
AVED - Ministry of Advanced Education (BC, Canada)
AYP - Adequate Yearly Progress (USA)
BCSSE - Beginning College National Survey of Student Engagement (USA)
CAAUT - Carrick Awards for Australian University Teaching
CAE - Council for Aid to Education
CCSSE - Community Colleges Survey of Student Engagement (USA)
CEQ - Course Experience Questionnaire
CELTs - Centres of Teaching Excellence
CHE - Centre for Higher Education Development (Germany)
CISO - British Columbia College and Institute Student Outcomes Survey
CLA - Collegiate Learning Assessment (USA)
CNVSU - National Committee for the Evaluation of the University System (Italy)
CQAHE - Committee for Quality Assurance in Higher Education
CSEQ - College Student Experience Questionnaire (Australia)
CSS - College Student Survey (Australia)
CSXQ - College Student Expectations Questionnaire (USA)
CUES-I - College and University Environments Inventory (Taiwan)
DeSeCo - Definition and Selection of Competencies
DEST - Department of Education, Science and Training (Australia)
DLHE - Destinations of Leavers from Higher Education (UK)
ECA - European Consortium for Accreditation
ESC - Education Support Centre (Australia)
EU - European Union
FIPSE - Fund for the Improvement of Post-Secondary Education
FYEQ - First Year Experience Questionnaire
GCA - Graduate Careers Australia
GDS - Graduate Destinations Survey (Australia)
GPA - Grade Point Average
GRE - Graduate Record Examination
GSA - Graduate Skills Assessment (Australia)
HAC - Hungarian Accreditation Committee
HEA - Higher Education Academy (UK)
HEFCE - Higher Education Funding Council for England
HESA - Higher Education Statistics Agency (UK)

HSSE - High School Survey of Student Engagement (USA)
IAF - Institutional Assessment Framework (Australia)
IEA - International Association for the Evaluation of Educational Achievement
IREG - International Rankings Expert Group
JQI - Joint Quality Initiative (Europe)
LTPF - Learning and Teaching Performance Fund (Australia)
LSSE - Law School Survey of Student Engagement (USA)
MCAT - Medical College Admissions Test (USA)
MCEETYA - Ministerial Council on Employment, Education, Training and Youth Affairs (Australia)
NAAL - National Assessment of Adult Literacy (USA)
NBEET - National Board of Employment, Education and Training
NCES - National Centre for Education Statistics
NLTF - National Learning and Teaching Fund (Australia)
NSS - National Student Survey (UK)
NSSE - National Survey of Student Engagement (USA)
NTFS - National Teaching Fellowship Scheme (UK)
NVAO - Nederlands-Vlaamse Accreditatieorganisatie (Netherlands)
NZVCC - New Zealand Vice-Chancellors' Committee
OECD - Organisation for Economic Co-operation and Development (Europe)
OWG - Outcomes Working Group (BC, Canada)
PIRLS - Progress in International Reading Literacy Study (USA)
PISA - Programme for International Student Assessment
PREQ - Postgraduate Research Experience Questionnaire (Australia)
QAA - Quality Assurance Agency (UK)
QAC - Quality Assurance Council (Hong Kong)
RAND - Research and Development (USA)
RQF - Research Quality Framework (Australia)
SAAL - State Assessment of Adult Literacy (USA)
SAT - College Admission Test (USA)
SEQ - Student Engagement Questionnaire (Australia)
SFQ - Short Form Questionnaire (Italy)
SJT - Shanghai Jiao Tong University Ranking of World Universities
SPU - Student Progress Unit (Australia)
TAAS - Texas Assessment of Academic Skills
TDGs - Teaching Development Grants
THES - Times Higher Education Supplement
TIMSS - Trends in International Mathematics and Science Study
TLQPR - Teaching and Learning Quality Process Reviews
TQI - Teaching Quality Information
UGC - University Grants Committee
US - United States
USA - United States of America
UK - United Kingdom
VAAI - Value Added Assessment Initiative (USA)
WHOO - Higher Education and Research Act