Interpretation Evaluation Tool Kit
METHODS AND TOOLS FOR ASSESSING THE EFFECTIVENESS OF FACE-TO-FACE INTERPRETIVE PROGRAMS

By Sam H. Ham and Betty Weiler


DISCLAIMER
The views contained in this publication are those of the authors and do not necessarily represent the views of the Sustainable Tourism Cooperative Research Centre (STCRC). While the authors have made all reasonable efforts to gather the most current and appropriate information, the STCRC does not make any warranty as to the correctness, completeness or suitability of the information, and shall in no event be liable for any loss or damage that you may suffer as a result of your reliance on this information.

CONTACT INFORMATION
To order copies of this Guide or the Summary, please visit our online bookshop [www.crctourism.com.au/bookshop] or alternatively complete and send the online order form.

COPYRIGHT
© CRC for Sustainable Tourism Pty Ltd 2005. All rights reserved. Apart from fair dealing for the purposes of study, research, criticism or review as permitted under the Copyright Act, no part of this book may be reproduced by any process without written permission from the publisher. Any enquiries should be directed to Brad Cox, Communications Manager [[email protected]], or Trish O'Connor, Publishing Manager [[email protected]].


Contents

PREFACE
ACKNOWLEDGMENTS
SECTION 1: INTRODUCTION
  INDICATORS OF INTERPRETIVE EFFECTIVENESS IN THIS TOOL KIT
  WHO THE TOOL KIT IS FOR
  THREE PACKAGES CUSTOMISED FOR THREE TYPES OF SETTINGS
  WHAT THE TOOL KIT IS AND WHAT IT IS NOT
  HOW AND HOW NOT TO USE THIS TOOL KIT
  THE BENEFITS AND COSTS OF USING THIS TOOL KIT
SECTION 2: WHAT'S IN YOUR INTERPRETATION EVALUATION TOOL KIT
  WHAT'S ON THE CD
  SYSTEM REQUIREMENTS
  INSTALLATION INSTRUCTIONS
  GET PREPARED BEFORE YOU START
SECTION 3: HOW TO CONDUCT THE VISITOR SURVEY
  1 – PREPARE TO CONDUCT YOUR VISITOR SURVEY
  2 – GIVE THE QUESTIONNAIRES TO VISITORS FOR COMPLETION
  3 – RETRIEVE COMPLETED QUESTIONNAIRES FROM VISITORS
  4 – ENTER THE DATA FROM THE QUESTIONNAIRES INTO THE TEMPLATE
  5 – LOOK AT YOUR RESULTS
  6 – INTERPRET YOUR RESULTS
SECTION 4: TAKING ACTION BASED ON RESULTS OF THE VISITOR SURVEY
SECTION 5: HOW TO CONDUCT THE OBSERVATIONS
  1 – PREPARE TO CONDUCT YOUR OBSERVATIONS
  2 – OBSERVE, LISTEN AND TALLY THE RESULTS
  3 – INTERPRET YOUR RESULTS
SECTION 6: TAKING ACTION BASED ON RESULTS OF THE OBSERVATIONS
SECTION 7: NEXT STEPS
APPENDIX A: HOW THE INDICATORS WERE SELECTED AND DEVELOPED
APPENDIX B: INDICATOR GROUPINGS AND RELIABILITY COEFFICIENTS
APPENDIX C: VISITOR QUESTIONNAIRES SHOWING CODES FOR DATA ENTRY
APPENDIX D: OBSERVATION FORM
GLOSSARY OF TERMS
AUTHORS


PREFACE

Over the past five decades, tourism providers across the world have recognised the importance of high quality interpretation as central to their mission. Today, national parks, museums, tour operators, cruise ship companies, wineries, breweries, food processing facilities, zoos, aquaria, botanical gardens, theme parks and a wide range of other tourism operators and destination managers worldwide deliver interpretive programs to their customers. Although the benefits they hope to achieve through their interpretive programs vary from organisation to organisation, the benefits generally fall into four broad categories: enhancing visitor experiences, strengthening public relations, protecting the site from visitor impacts, and protecting visitors from on-site hazards.

Interpretation that produces such benefits makes a difference in whether the organisation succeeds or fails. Being able to document the achievements of your interpretive program not only influences budgets and financial decisions, but also provides the benchmarks needed for monitoring and continually improving the interpretive services and products you offer. Our overarching purpose in developing this Tool Kit is to give you a set of practical tools that allow you, with minimum bother and complexity, to reliably and validly evaluate your interpretive offerings, and thereby enhance your effectiveness as an organisation.

Although many organisations want to conduct good in-house evaluations of their interpretive programs, a number of factors have prevented them from doing so. Depending on the type of information they want, gaining good evaluative data often requires research expertise that is not available among the organisation's staff. Consequently, many organisations have found it a challenge to find ways to collect information that is valid, reliable and actually indicative of interpretive success or failure. A result is that such in-house evaluations sometimes produce 'results' that mislead rather than clarify, and may ultimately lead to poor decisions about interpretive program development.

This Tool Kit was developed by two social scientists who have both theoretical and practical grounding in interpretive best practice. Our goal from the outset has been to keep the Tool Kit decidedly user-friendly while building into it a strong theoretical and methodological foundation that allows you to be confident in the results it produces and the decisions you make based on them. As such, the Tool Kit does not require you or your staff to worry about the details of the research or statistical analyses that went into producing the indicators we have developed for you. But we have provided these details in an appendix for those interested in knowing more about how the Tool Kit was produced.

The Tool Kit contains 11 indicators that were chosen based upon the expressed priorities of a range of industry partners. In selecting the indicators, we wanted to make sure that they: (1) reflected the types of outcomes that users of the Tool Kit actually want from their interpretive programs (for example, enhanced visitor enjoyment, positive visitor attitudes about conservation, positive word-of-mouth advertising, provoking visitors to think about the values inherent at the site, etc.); (2) were theoretically valid based on what is known about interpretation's potential impacts on how visitors think, feel, and possibly behave with respect to the things being interpreted for them; and (3) would require from users minimal effort, expense, and little or no social research expertise to collect and analyse the data, yet produce results that are both valid (i.e. accurately measuring what each indicator is supposed to measure) and reliable (i.e. producing consistent measurements so that you can make meaningful comparisons of your results over time and make informed judgments about the status and progress of your interpretive programs).

To achieve these qualities, we relied extensively on the ongoing input of professional staff based at heritage, nature, and food & beverage destinations, and we field-tested all of the indicators at multiple locations representing each of these settings (3 heritage sites, 2 nature sites and 2 food & beverage sites). The result was three different evaluation packages (each consisting of a 5-minute visitor survey and a form for observing visitor behaviour), customised for each type of setting. Appendices A and B contain additional information on the development of the three packages. A full description of the research methodology and analyses that produced each set of indicators is available in the STCRC Technical Report (in press), Development and Refinement of a Methodology and Evaluation Tools for Assessing Interpretation Outcomes, which can be obtained by contacting the authors.

We hope you enjoy using the Interpretation Evaluation Tool Kit and find that it not only provides you with valuable information on which to base decisions about the continued development of your interpretive offerings, but also that it serves its intended purpose — to strengthen your use of interpretation to achieve the kind of success your organisation is trying to achieve.

Sam H. Ham and Betty Weiler


ACKNOWLEDGMENTS

The Sustainable Tourism Cooperative Research Centre, established by the Australian Commonwealth government, funded this research. This Tool Kit has been prepared with support and funding from numerous organisations and individuals. In particular, we would like to thank the following:

Members of our Industry Reference Group:
• Deb Lewis (Tourism Tasmania)
• Julia Clark (Port Arthur Historic Site)
• Sue Drummond (Sovereign Hill Museums Association)
• Pamela Harmon-Price (Environment Queensland)

Organisations providing cash and in-kind contributions:
• Sustainable Tourism Cooperative Research Centre (STCRC)
• Tourism Tasmania
• Sovereign Hill Museums Association
• Port Arthur Historic Site
• Monash University, Tourism Research Unit (TRU)
• University of Idaho (USA), Center for International Training & Outreach (CITO)

The businesses and organisations that helped us identify the outcomes they wanted for their interpretation, and then helped to trial the questionnaires:
• Sovereign Hill Museums Association
• Port Arthur Historic Site
• Tasmanian Parks and Wildlife Service
• Hobart Cruises
• Boag's Centre for Beer Lovers
• Cascade Brewery
• Federal Hotels & Resorts, West Coast Wilderness Railway

We owe a very special debt of gratitude to Tourism Tasmania, especially Deb Lewis and Gerald Englebretsen, for taking such a deep interest in this project and for facilitating numerous opportunities and opening many doors of collaboration that have added both rigour and value to the final product. Our thanks go also to Dr Anne Hardy (University of Northern British Columbia, Canada) for her involvement in the early stages of this project, and to Luke Latimer (STCRC) and Annita Allman (Monash University TRU) for assistance with software programming and desktop publishing. And finally, we want to thank the hundreds of visitors who gave their good will and time to participate in the various field tests of the questionnaires.

You are invited to contact a member of the research team if you require further clarification or you want to provide feedback on this Tool Kit. We would be happy to hear from you!

Professor Sam Ham
Phone: 1 208 882 5128
Fax: 1 208 882 7588
Email: [email protected]

Professor Betty Weiler
Phone: 03 9904 7104
Fax: 03 9904 7225
Email: [email protected]

If you want to purchase a copy of the Tool Kit, please contact Brad Cox, Communications Manager, Sustainable Tourism CRC [[email protected]], PMB 50 GOLD COAST MC, QLD 9726, Australia, or order it from the STCRC online Bookshop at http://www.crctourism.com.au/bookshop


SECTION 1: INTRODUCTION

Indicators of Interpretive Effectiveness in this Tool Kit
Following a rigorous selection process that incorporated expressed industry priorities, substantiated communication theory and research, extensive field-testing with several hundred visitors, and several statistical analyses to establish the validity and reliability of the data, 11 indicators were selected to form the core of the Interpretation Evaluation Tool Kit. These indicators capture a range of thinking, feeling and behavioural outcomes of interpretation, including several that have both practical and commercial implications. Many of the indicators involve multiple measurements comprising as many as 3 to 5 sub-indicators, while others are captured with a single measurement. In all cases, the indicators selected have acceptable reliability and validity, and are both easy and inexpensive to measure. See Appendix A for a more detailed description of the indicator selection process.

The 11 indicators are labelled A through K, each capturing a different dimension or outcome of effective interpretation. They are shown in the following table, which lists the indicators for each type of setting (food & beverage, heritage and nature). Note that not all of the indicators apply to all three settings, and that there are some differences in how a given indicator is described in the various settings. For example, indicator A in the Heritage package is labelled 'Impact on current world view via empathy with historic period & people', but it is called 'Impact on appreciation of indigenous connections to nature' in the Nature package. This is because the items used to measure indicator A were different in the two cases, and the title of the indicator needs to reflect this difference. Likewise, the wording of indicator C is slightly different in the Heritage and Nature packages, and indicator E in the Food & Beverage package is slightly different from its usage in the Heritage and Nature packages. In addition, note that neither indicator A nor indicator C is part of the Food & Beverage package.

Of the 11 overall indicators, 10 (A–J) are measured using the Visitor Questionnaire described in Section 3 (How to Conduct the Visitor Survey). Indicator K focuses on how much verbal and physical interaction occurs between a presenter and an audience, an outcome best assessed by observing and listening to actual presentations. For this reason, a separate Observation Form was developed for indicator K. This form and how to use it are described in Section 5 (How to Conduct the Observations).


TYPE OF SETTING

Indicator* | Food & Beverage | Heritage | Nature
A | (not included) | Impact on current world view via empathy with historic period and people | Impact on appreciation of Indigenous connections to nature
B | Elaboration (provoked visitors to thought) | Elaboration (provoked visitors to thought) | Elaboration (provoked visitors to thought)
C | (not included) | Positive attitude toward heritage preservation | Positive attitude toward nature conservation
D | Global evaluation of interpretation | Global evaluation of interpretation | Global evaluation of interpretation
E | Desire to participate in additional interpretive activities | Desire to participate in additional interpretive activities | Desire to participate in additional interpretive activities
F | Desire to purchase a product or memento related to the place | Desire to purchase a memento or souvenir related directly to the site story | Desire to purchase a memento or souvenir related directly to the site story
G | Desire to stay longer | Desire to stay longer | Desire to stay longer
H | Desire to return for a repeat visit | Desire to return for a repeat visit | Desire to return for a repeat visit
I | Positive word-of-mouth advertising | Positive word-of-mouth advertising | Positive word-of-mouth advertising
J | Interpretation was relevant and meaningful to visitors' lives | Interpretation was relevant and meaningful to visitors' lives | Interpretation was relevant and meaningful to visitors' lives
K | Visitors provoked to interact with the presenter | Visitors provoked to interact with the presenter | Visitors provoked to interact with the presenter

* Indicator K is measured using the Observation Form. All other indicators are measured using the Visitor Questionnaire.


Indicator A: This indicator captures the degree to which your interpretation impacted visitors' point of view about their own lives (Heritage package) and their appreciation of Indigenous feelings of connectedness to nature (Nature package). Results for this indicator give you a broad indication of whether your interpretation helps visitors to make connections and draw conclusions about these issues. This is accomplished with 5 strongly reliable measurements (sub-indicators) in the Heritage questionnaire and with 4 strongly reliable measurements in the Nature questionnaire. (If you are interested in the details of the sub-indicators and their corresponding questionnaire items, see Appendix B.)

Indicator B: This indicator is a measure of 'elaboration', which is the amount of thinking your interpretation provokes visitors to engage in. 'Provocation' is considered by many experts to be the single most important outcome of interpretation. When visitors are provoked to thought, they have been moved to make connections between the topic being interpreted and what they already know and feel. Therefore, results for this indicator give you an indication of how much your interpretation has provoked people to process new thoughts about the things you interpret. This is accomplished with 5 strongly reliable measurements (sub-indicators) in the Heritage and Nature questionnaires and with 4 strongly reliable measurements in the Food & Beverage questionnaire. (If you are interested in the details of the sub-indicators and their corresponding questionnaire items, see Appendix B.)

Indicator C: This indicator captures the degree to which your interpretation led visitors to have a stronger positive attitude to heritage preservation (Heritage package) and nature conservation (Nature package). Results for this indicator give you a broad indication of whether your interpretation is leading visitors to have a stronger positive attitude toward long-term protection of the kinds of values your site represents. This is accomplished with 3 strongly reliable measurements (sub-indicators) in both the Heritage questionnaire and the Nature questionnaire. (If you are interested in the details of the sub-indicators and their corresponding questionnaire items, see Appendix B.)

Indicator D: This indicator measures visitors' overall (global) evaluation of interpretation at your site. Specifically, it captures whether they found the interpretive activities they attended to be enjoyable, good, interesting and satisfying. Results for this indicator give you a broad sense of visitors' overall enjoyment and satisfaction with the interpretation you offer. This is accomplished with 2 strongly reliable sub-indicators in the Food & Beverage questionnaire, and with 4 strongly reliable sub-indicators in both the Heritage and Nature questionnaires. (If you are interested in the details of the sub-indicators and their corresponding questionnaire items, see Appendix B.)


Indicator E: This indicator measures whether your interpretation is good enough to make visitors want to have even more. Results for this indicator give you a broad indication of whether your interpretation is stimulating visitors to want to immerse themselves more deeply in the things you are interpreting. This is accomplished with a single question (3A) in all three questionnaires.

Indicator F: This indicator measures a commercially important outcome of interpretation – whether visitors were stimulated to buy a product, memento or souvenir that is directly related to the story interpreted at your site (such as a bottle of wine at your cellar door, a box of chocolates at your factory, or a postcard, clothing item or other keepsake that the visitor feels is directly related to the place). Results for this indicator give you a sense of whether your interpretation is stimulating a 'buying impulse' in visitors. This is accomplished with a single question (3D) in all three questionnaires.

Indicator G: This indicator measures whether your interpretation is good enough to make visitors want to stay longer at your site or operation. This may have commercial implications, since visitors who actually extend their stay may spend additional money on food or in a gift shop. Results for this indicator give you a sense of whether your interpretation is contributing to visitors spending more time at the site than they had initially planned to spend. This is accomplished with a single question (3B) in all three questionnaires.

Indicator H: This indicator measures the degree to which your interpretation stimulates visitors to want to return for a repeat visit. Of course, there is no way of knowing whether visitors actually return to your site, but scores on this indicator give you a broad indication of whether visitors have the desire to do so. Measurement is accomplished with a single question (3C) in all three questionnaires.

Indicator I: This indicator measures the degree to which your interpretation stimulates visitors to want to say positive things to another person about your site or operation (positive word-of-mouth advertising). Results for this indicator give you a broad indication of how inclined your visitors are to tell other people that your site or operation is interesting, enjoyable, and worth the money and time to visit, and whether other people should visit you. This is accomplished with 5 strongly reliable sub-indicators (questions 2A–2E) in all three questionnaires. (If you are interested in the details of the sub-indicators and their corresponding questionnaire items, see Appendix B.)


Indicator J: This indicator measures the degree to which visitors think your interpretation is relevant and meaningful to their lives. Results for this indicator give you a broad indication of whether visitors felt your interpretation connected to things they already know and care about. This is accomplished with 3 moderately reliable sub-indicators in the Food & Beverage questionnaire, and with 4 moderately reliable sub-indicators in the Heritage and Nature questionnaires. (If you are interested in the details of the sub-indicators and their corresponding questionnaire items, see Appendix B.)

Indicator K: This observational indicator allows you to assess how much your activities are provoking visitors to interact with the people presenting your interpretive program (i.e. guides, interpretive staff, etc.). In a sense, this is a measure of how one-way or two-way their communication with visitors is. Results for this indicator give you a detailed indication of the amount and kinds of interaction going on between presenters and visitors, who is initiating it, and what kinds of responses it is causing. This is accomplished using the Observation Form and procedures detailed in Section 5 (How to Conduct the Observations).
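To make the recurring phrase 'strongly reliable measurements (sub-indicators)' concrete, here is a minimal sketch of how a multi-item indicator is typically scored and how its internal consistency (Cronbach's alpha) can be checked. The Tool Kit's templates do all of this for you automatically, so this is purely illustrative: the item scores, the 1–5 response scale and the function names below are assumptions for illustration, not part of the Tool Kit.

```python
# Illustrative only: the Tool Kit performs these calculations for you.
# All data and names below are hypothetical.

def cronbach_alpha(items):
    """Internal consistency (reliability) of a set of sub-indicator items.

    items: one list of respondent scores per item, all the same length.
    """
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Four hypothetical 'elaboration' items (cf. Indicator B), each scored 1-5
# by six visitors:
items = [
    [4, 5, 3, 4, 5, 4],
    [4, 4, 3, 5, 5, 4],
    [3, 5, 4, 4, 4, 4],
    [4, 4, 3, 4, 5, 5],
]

# An indicator score per visitor is simply the mean of that visitor's
# item scores:
scores = [sum(item[i] for item in items) / len(items) for i in range(len(items[0]))]
print("Indicator scores:", scores)                       # e.g. [3.75, 4.5, ...]
print("Cronbach's alpha: %.2f" % cronbach_alpha(items))  # ~0.74 for this data
```

Coefficients around 0.7 or higher are conventionally read as acceptable internal consistency; the actual reliability coefficients behind the 'strongly reliable' and 'moderately reliable' labels used above are reported in Appendix B.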

Who The Tool Kit Is For
This Interpretation Evaluation Tool Kit has been developed for people who need to know, in measurable terms, the degree to which their interpretive programs are accomplishing what they are intended to do. Such people typically include, but are by no means limited to, staff working in parks, museums, and other heritage and natural attractions, as well as at a range of other settings where interpretive services are offered (including cruise ships, guided tours, wineries, breweries, cheese and chocolate factories, manufacturing plants, energy production facilities, etc.). Whichever of these or other settings you work in, if you need to know whether your interpretive services are performing as you want them to perform, this Tool Kit is for you.

The Tool Kit is designed for users who do not have formal education or training in social science research. It does not use highly 'qualitative' methods, which would require specialised training and skills to undertake in-depth explorations of visitors' thoughts, perceptions, preferences and feelings about interpretation. Instead, it uses simple quantitative measurement, so you do not have to know or apply any statistical methods to use this Tool Kit. All analyses and calculations are done automatically for you.

You can use the methods and tools in this Tool Kit at any time of year without having to spend money on outside consultants. However, you do need to orient yourself and/or assistants to the Tool Kit's rationale and methods.


Three Packages Customised for Three Types of Settings
You can use the Tool Kit to evaluate interpretation at attractions, parks, museums, on guided tours, at events, or for any other business or organisation that uses interpretation to enhance the visitor experience. The tools in the three Evaluation Tool Kit folders (for food & beverage interpretation, heritage interpretation and nature interpretation) are designed for use in three kinds of situations:

Food & beverage interpretation: If you interpret food and/or beverages, or you are an attraction that focuses mainly on interpreting a product that you sell to the public, then the Food & Beverage Questionnaire and Food & Beverage Observation Form are for you.
Examples: a cheese or chocolate factory, restaurant, winery, cellar door, brewery, or a wine or food festival.

Heritage interpretation: If you are a heritage place, historic site, museum, tour operator, event or other attraction that focuses mainly on interpreting the human past, then the Heritage Questionnaire and Heritage Observation Form are for you.
Examples: a museum, a cultural visitor centre, an historic corridor, a heritage building, a sacred or historic place, or a heritage festival.

Nature interpretation: If you are a protected area, zoo, aquarium, wildlife park, tour operator, event or other attraction that focuses mainly on interpreting nature, then the Nature Questionnaire and Nature Observation Form are for you.
Examples: a national park, a state forest, a cave tour, a river-based tour, or a nature-based event such as Earth Day.


What the Tool Kit Is and What It Is Not
The Tool Kit can give you results to help you identify which outcomes you are and are not achieving with your interpretation and, combined with some diagnostic analysis, you can use the results to begin to identify steps that need to be taken to enhance interpretation outcomes. The results can also be used to determine the extent to which interpretation benefits your organisation and helps to advance your management goals. For interpreters and guides, the content of the visitor questionnaires will provide a focus for exploring and reflecting on what your interpretation is meant to accomplish.

While the Tool Kit can provide a rapid-response measure of 'how well' or 'how much' is being achieved, it does not provide information about the longer-term impacts of interpretation on visitors once they have returned home. The methods in this Tool Kit do not provide answers to why interpretation is or is not achieving an organisation's desired outcomes. A more complex research design is required to assess cause-and-effect relationships and to establish what specific actions need to be taken to improve your interpretation. Nevertheless, patterns in results may suggest broad avenues for improvement or correction, and the Tool Kit highlights these.

How and How Not to Use This Tool Kit
The Tool Kit is designed to be used to evaluate overall site interpretation, but it can also be used to evaluate a specific interpretive product or program. However, since the Tool Kit's focus is strictly on interpretation, it will not allow you to assess the performance of other types of visitor services (such as how satisfied people are with the cleanliness and appearance of the setting, their opinions about the courtesy of staff, whether they liked the food, etc.).

In addition, the Tool Kit is designed to evaluate face-to-face interpretation, not self-guided or non-personal media such as signs, exhibits, websites, brochures, etc. However, it needs to be recognised that in an exit survey a visitor's responses are likely to reflect their experience with the entire interpretive program, not necessarily just the face-to-face elements of it.

The visitor questionnaires in the Tool Kit are intended to be used intact to measure multiple indicators of interpretation outcomes. They should not be altered or edited to target single indicators. Doing so will lead to unavoidable measurement errors that will undermine the accuracy and usefulness of your evaluation.

The Tool Kit was designed as a tool for taking the 'pulse' of interpretation and, over a period of time, for monitoring trends in what it is achieving. It can also be used when trialling a new interpretation program or product, but this must be done with caution, as a range of factors can impact on the outcome of a single interpretive product at a single point in time.


This Tool Kit is not intended as a method or set of instruments for assessing individual staff performance. For other forms and methods of evaluation, see the ANZECC Best Practice report and our full project report to the STCRC, which is available from the STCRC's online bookshop at www.crctourism.com.au/bookshop.

What the Tool Kit is and is not designed to do… | Yes | No
Provide results to help me identify which outcomes I'm achieving and not achieving with my interpretation | ✓ |
Help me determine the extent to which interpretation benefits my organisation and helps to advance my management goals | ✓ |
Cause me to reflect on what interpretation at my site is meant to accomplish | ✓ |
Provide a rapid-response measure of 'how well' or 'how much' is being achieved through interpretation at my site | ✓ |
Help me evaluate non-interpretive visitor services (such as staff courtesy, food and beverage quality, etc.) | | ✓
Provide 'back-home' measures for determining longer-term impacts of interpretation on visitors | | ✓
Tell me why interpretation is or is not achieving the outcomes my organisation wants | | ✓
Allow me to evaluate a specific interpretive product or program | ✓ |
Allow me to evaluate face-to-face (guided) interpretation | ✓ |
Allow me to evaluate non-personal (self-guided) interpretation | | ✓
Allow me to change the questionnaire | | ✓
Allow me to monitor trends in accomplishment over time | ✓ |
Allow me to assess the performance of individual staff | | ✓

The Benefits and Costs of Using this Tool Kit
Other than staff time, it should cost you very little to implement the methods outlined here: at the very most, perhaps $100 for printing, photocopying and field expenses. The benefit (a rapid-response evaluation) is significant in relation to such minimal cost.


SECTION 2: WHAT'S IN YOUR INTERPRETATION EVALUATION TOOL KIT

The Tool Kit consists of two items:
1. This Interpretation Evaluation How-To Manual
2. The Tool Kit CD, which includes all the files you need to conduct valid and useful evaluations of your face-to-face interpretation programs and activities. These are described next.

What's on the CD
The Tool Kit CD contains three main folders:
1. Adobe
2. Evaluation Tool Kit
3. Examples

ATTENTION MAC USERS! It's best if you copy the folders to your hard drive and open them there (rather than from the CD).

Adobe
Contains the latest version of Adobe Acrobat Reader. This application allows you to view the .pdf questionnaires.

Evaluation Tool Kit
Contains three subfolders with the tools to conduct your interpretive program evaluation:
• Food and Beverage Evaluation Tools
  o Food & Beverage Questionnaire.pdf
  o Visitor Observation Form.pdf
• Heritage Evaluation Tools
  o Heritage Questionnaire.pdf
  o Visitor Observation Form.pdf
• Nature Evaluation Tools
  o Nature Questionnaire.pdf
  o Visitor Observation Form.pdf

Examples
Contains completed example questionnaires.

Installers
The CD also contains the two installers for the Mac OS X and Windows platforms. The installers are used to set up the Interpretation Evaluation Tool Kit application on your computer:
• Mac OS X – interpretationkit_macos_1_0.sit
• Windows – interpretationkit_windows_1_0.exe


System Requirements
Windows users:
• Disk space: minimum 50.55 MB

Mac OS X users:
• Disk space: minimum 12.01 MB
• Java 1.4.2 or higher is recommended; versions 1.3.x and 1.4.1 are also supported.

Mac OS X and Java
The Mac OS X operating system has the Java platform pre-installed, but you require Mac OS X 10.3 (Panther) to support Java 1.4.2. If you have an earlier version of Mac OS X, the Java version installed on your Mac may be out of date, in which case you should follow the steps below to upgrade to at least version 1.4.1.

To determine the version of Java your Mac is running, follow these steps:
1. Go to the Applications | Utilities folder and double-click on Terminal.
2. Type java -version

If the build number is less than 1.4.1, you will need to install a later version of Java. If you have Mac OS X 10.2.6 or later, download the Java 1.4.1 Update 1 from:
http://www.apple.com/downloads/macosx/apple/java141update1formacosx.html
If you have Mac OS X 10.3.4 or later, download Java 1.4.2 from:
http://www.apple.com/downloads/macosx/apple/javaupdate142.html

Installation Instructions
Mac OS X
1. Copy the interpretationkit_macos_1_0.sit file from the CD to your hard drive.
2. Unzip interpretationkit_macos_1_0.sit.
3. Run the installer by clicking on Interpretation Kit Installer.
4. Follow the installation instructions.

To run the application, go to the directory where the Interpretation Evaluation Kit was installed (by default, this is the Applications directory) and double-click on the Interpretation Evaluation Kit icon.

Windows
1. Copy the interpretationkit_windows_1_0.exe file from the CD to your hard drive.
2. Run the interpretationkit_windows_1_0.exe executable.
3. Follow the installation instructions.


To run the application, double-click on the Interpretation Evaluation Tool Kit shortcut on the desktop (if the option to add a desktop shortcut was selected during installation), or from the Start Menu, go to Start | Programs | Sustainable Tourism CRC | Interpretation Evaluation Tool Kit.

Get Prepared Before You Start
The Evaluation Tool Kit folder contains three versions of the visitor questionnaire, plus the visitor observation form, each designed for use in one of the three types of tourism operations: food and beverage interpretation settings, heritage interpretation settings, and nature interpretation settings. Decide which one is primarily for you. If more than one is relevant for your setting or operation, you may want to conduct separate evaluations using the tools appropriate for each focus (food & beverage, heritage and/or nature).

Everything you need to conduct an on-site evaluation of your interpretive program is here in this manual and on your Tool Kit CD. Access to a photocopier and adequate time to print and make copies of the visitor questionnaires are the only other requirements. Unless you are going to hand out the questionnaires yourself, it is a good idea to ensure that the staff collecting the data understand what is required of them.

You will need to print the questionnaire directly from the CD, and you will need Adobe Reader on your computer to open the files. After the questionnaires have been completed, you will also need a computer with a CD drive. Finally, the tables and charts are best printed from a good quality colour printer (for example, a laser or inkjet), although they look just fine on most other printers and are perfectly legible in black and white.

Note: If you opt to print pie charts instead of bar charts, you'll need a colour printer, since the pie segments are colour-coded.

Here are some points to keep in mind when preparing for your evaluation:

• You will find suggestions and requirements for where and when to collect data (i.e. administer the questionnaires and observe visitors) in Sections 3 and 5 of this manual.

• The Interpretation Evaluation Tool Kit will enable you to evaluate your interpretation rigorously and yet cost-effectively. It assumes that you have little or no background in social science research or in conducting evaluations. However, it also assumes that you will follow the procedures as they are outlined here. Any changes to the methods or instruments will compromise the validity and reliability of the findings.

• The Interpretation Evaluation Tool Kit has been designed to measure interpretation outcomes identified as important by tourism operators (at all levels of management). The 'indicators' were developed and pre-tested on a range of tourism operations and then refined for this Tool Kit. If you wanted to evaluate your interpretation against different outcomes, you would need to design different methods and instruments from what we have provided in this Tool Kit.


FAQs About the Tool Kit

Will the Tool Kit help me identify what's wrong with our interpretation?
Yes. Over time, results of repeated evaluations will give you valid and reliable feedback on the strengths and weaknesses of your interpretive program with respect to the specific indicators being measured.

Will the Tool Kit help me identify what's wrong with a particular tour or presentation and how to fix it?
Yes and no. The results of repeated evaluations will help you identify weaknesses, but you will need to use your own experience and knowledge of your operations to identify the causes and best remedies.

Will the Tool Kit tell me what we are doing well?
Yes and no. The results of repeated evaluations will help you identify strengths with respect to the indicators being measured, but they will not identify the causes of your success.

Can I use the Tool Kit to demonstrate how interpretation is benefiting or not benefiting the business?
Yes. A number of indicators in the questionnaire pertain to business-relevant outcomes such as word-of-mouth advertising, merchandise sales and extending length-of-stay.

Is the Tool Kit suitable for extended tours, say a tour that goes for a couple of days or more?
Yes, and you always have the option of adding other questions, perhaps conducting personal interviews to get in-depth responses from visitors who, having spent a few days with you, may be willing to spend more than 5 minutes on an evaluation.


SECTION 3: HOW TO CONDUCT THE VISITOR SURVEY

Indicators A to J are measured using one of the three Visitor Questionnaires contained on the CD in the Evaluation Tool Kit Folder. Conducting your visitor survey involves just six steps. These include getting prepared (e.g., printing questionnaires and getting staff ready), handing out the questionnaires to visitors, retrieving the completed questionnaires, entering data from the questionnaires into the appropriate Evaluation Template, viewing the results and thinking about what they tell you. Here’s a quick look at the entire visitor survey process:

1 – Prepare to conduct your visitor survey

2 – Give the questionnaires to visitors for completion

3 – Retrieve completed questionnaires from visitors

4 – Enter the data from the questionnaires into the template

5 – Look at your results

6 – Interpret your results


1 – Prepare to conduct your visitor survey
Before you use this Tool Kit, be sure that all interpretive staff know that you are undertaking an evaluation and why. Be especially sure they understand that this is not a performance appraisal, that results will be used in-house, and that data that may identify individuals will be treated confidentially. Let staff know what you are trying to do and why: to assess the outcomes of your interpretive program so that all of you can work together to improve interpretation's contributions to your organisational goals. Interpretive staff should also be briefed on the potential benefits of the evaluation in terms of improving the status of their work in the organisation.

How to print the questionnaire
As mentioned earlier, there are three versions of the tools, and you need to select the one that best fits your organisation (Food & Beverage, Heritage or Nature). Then, in the Evaluation Tool Kit folder, open the subfolder that best applies to you.

When you open the subfolder, you will find the Visitor Questionnaire. Double-click on the questionnaire icon and, when it is open, immediately save it with a new file name to your own computer (you cannot save to the CD). For example, you might want to rename it something like InterpEvalNov2004, and then use 'Save As' in the 'File' menu to save it to your computer.

[Screenshots in the original manual illustrate opening the questionnaire file by double-clicking on its icon, then using 'Save a Copy' or 'Save As' to rename the file and choose its saving location on your computer.]


Remember, if you have trouble opening the PDF file, install Adobe Reader, which you'll find in the Adobe folder on the Tool Kit CD. You can now print the questionnaire either from the file you just saved to your computer or directly from the CD.

TIP! You might consider photocopying the questionnaire on different coloured paper each time you use it, so you can easily keep the groups of responses separate.

Preparing the people who will conduct the visitor survey
Unless you will be collecting all the data yourself, selecting appropriate staff to do it for you is critically important. You need people who are committed to the organisation and who can be professional, but who are also friendly and outgoing. They need to be willing to take the process seriously, and they need to be able to follow instructions to the letter. In undertaking either data collection or data analysis, staff should be aware of the ethical responsibility they have to both the organisation and their colleagues to treat the findings as strictly confidential unless authorised to do otherwise.

As we discuss in detail shortly, you will need to orient and prepare field staff to follow important procedures, including selecting, approaching and requesting visitors to participate in the evaluation. You will also need to establish a system for keeping careful records of the decisions you make regarding sampling dates/days/times, locations, number of data collectors and number of visitors to be approached, and afterwards record what differed on the day from what was planned.

In addition to the questionnaires, you need to assemble other field materials such as name tags for your data collectors, clipboards and pens. How your data collectors dress can be important – usually you want them to dress smart casual, so they look professional but not intimidating, and so that they look like they are taking the job seriously. Unless it is impractical, encourage data collectors to wear everyday clothing instead of their company uniform or badge. This will reduce any pressure visitors might feel to be 'nice' in their responses, a type of bias known as the 'halo effect'.

Finally, ensure that you have looked after the safety, health and comfort of your data collectors, e.g. consider provision of emergency contact details, mobile phones, sunscreen, hats and insect repellent.


FAQs About the Visitor Questionnaire

Can I delete some questions in the questionnaire?
We strongly recommend against this, as each instrument has been designed, piloted, tested and validated to measure a series of indicators that rely on the combination and sequence of questions now included in the questionnaire. If any particular indicator isn't relevant to your business (e.g. the Indigenous indicator), simply ignore those responses when you type in your data, or discard the actual tables and charts for this indicator and its sub-indicators.

Can I change the wording of the questions or add new questions to suit our business?
Again, we recommend against this, as we cannot vouch for the validity and reliability of any findings you produce if the instrument is altered in any way.

Can I add a section with some socio-demographic questions?
This should be OK. But we strongly recommend that you add them at the end of the standard questionnaire, NOT at the beginning. Be very careful, however, that the additional questions do not add significantly to the amount of time required to fill out the questionnaire. You might want to pattern the format of these questions after existing visitor studies that you want to compare with.

Can the questionnaires be translated into a foreign language and used?
We cannot guarantee that the indicators we have developed will be reliable measures in any language other than English. But if you are willing to accept this limitation, then a questionnaire translated by a native speaker might give you a reasonable snapshot of how non-English speaking visitors evaluate your interpretive program.


2 – Give the questionnaires to visitors for completion
Now that you are prepared to conduct your visitor survey, you will need to make some practical decisions about how to administer the questionnaire to visitors. We suggest you give your data collectors the following checklist to complete as they prepare themselves to collect the data.

Before you set out, complete this checklist (tick each item):
• Know and familiarise yourself with the particular site for data collection
• Dress appropriately, i.e. smart casual; do not wear a staff uniform or a staff name tag
• Make sure you have enough blank questionnaires and ensure they are identifiable, such as by colour or by today's date
• Assemble other field materials needed, such as a table, clipboards and pens
• Bring a mobile phone and the contact details of your supervisor in case something unexpected occurs
• Consider your location and the weather and prepare accordingly, e.g. a hat, sun-block, sunglasses, a rain jacket, umbrella and insect repellent
• Know how many visitors you need to approach and how many completed questionnaires you need to collect in this data collection period
• Know what method you will use for selecting visitors in an unbiased way
• Know how to approach visitors in a friendly and ethical way, and practise what you will say
• Read and know what to do when a visitor you did not approach volunteers to complete a questionnaire
• Read and know what to say if a visitor wants to respond as a couple, gives the questionnaire to a child to complete, or asks you to write down her or his answers
• Read and know what to do if a visitor you approach struggles to understand your English
• Read and prepare yourself for what to say when a visitor refuses (chooses not to participate)
• Have a system for recording each time a person refuses to participate and the reason why


Deciding when and where to approach visitors
This is an on-site (or on-tour) survey that uses personally administered questionnaires; that is, the questionnaires are given to and collected from visitors by you or other staff. The questionnaires themselves are self-completed, that is, completed by the visitor without your help. This should take no longer than 5 minutes in most cases. The data collector does not normally complete the questionnaire on behalf of the respondent, although this would be okay if the respondent asked for it. Every questionnaire should ideally be completed by an individual visitor, not a group of visitors.

This Tool Kit is designed to evaluate the overall interpretive experience of your visitors, so you must survey them after they have completed or almost completed their visit. On the other hand, it is important to get responses while visitors still have something to say and the energy to say it. Either way, it is important that you try to catch visitors before they feel very tired or are in a rush to leave.

When determining where and when you will administer the questionnaires, take into consideration the comfort of your visitors as well as your data collectors – find a location that is out of the hot sun, wind and rain, and ideally one that is reasonably well lit and has a place to sit. Choose a place that is free of distractions (e.g. not too noisy).

How often should I give out questionnaires?
Whether you can administer the questionnaires at one point in time, or whether you need to do it several times, will depend on your visitors – for example, do they change seasonally? Another consideration is whether you want to evaluate your interpretation in relation to the introduction of a new product or an interpretive training program. If you plan to do repeated evaluations and compare them, you need to be consistent in every way you possibly can, e.g. the time of week and day, the location of the survey, and the person(s) you use to administer the questionnaires, so that differences are truly the result of your interpretation and not some other factors.

How many completed questionnaires do I need?
Generally you want the greatest number of completed questionnaires that your time and budget will allow. If you can include all your visitors, then do so. This is called a census. But if you have too many visitors to get to all of them, then you will need to give the questionnaire to a sample of them.

The main factor determining whether you should attempt a census of your visitors or draw a sample is their numbers. It is best to get a census if you have the time and resources to do it. A census gives you confidence in knowing what all your visitors think. A sample means you have to assume that the visitors you selected are representative of all the rest. As a general rule, we suggest the following:


Number of visitors on a given day | Census or sample?
50 or fewer | Census
More than 50 | Sample

But remember, getting a census is always preferred, even with larger numbers.

If you need to select a sample
If you decide to select a sample of visitors, get as many as you can within the constraints of time and available staff. As a general rule, try to get at least 50 completed questionnaires on a given day, but more is better if you can get them. Greater numbers generally mean you can be more confident that the responses accurately reflect what all your visitors think.

If you want to be able to report on what specific sub-groups of visitors think, then you need to have enough completed questionnaires from those sub-groups to confidently do this. So, for example, if you were to collect 200 questionnaires but only 20 of them were from overseas visitors, you may not be able to tell a story about overseas visitors with these results – unless, of course, there were only a total of 20 overseas visitors who visited your site or took your tour. If you want to see whether the experience of visitors at peak times is different to the experience at non-peak times, then again you will need enough completed questionnaires from both sets of visitors. It is good to aim for somewhere between 50 and 100 completed questionnaires for each sub-group or type of visitor. The sketch below turns these rules of thumb into a simple planning calculation.
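As a rough planning aid, the rules of thumb above (census at 50 or fewer visitors; otherwise at least 50 completed questionnaires, and 50–100 per sub-group you want to report on) can be expressed in a few lines of code. This is a minimal sketch under stated assumptions, not part of the Tool Kit; in particular, the 20% refusal rate is an assumption you should replace with your own experience.

```python
# Minimal planning sketch based on the manual's rules of thumb.
# The refusal rate is an assumption, not a Tool Kit figure.
import math

def plan_survey(expected_visitors, subgroups=1, expected_refusal=0.20):
    """Return ('census' | 'sample', number of visitors to approach)."""
    if expected_visitors <= 50:
        return "census", expected_visitors        # ask everyone
    target = 50 * subgroups                       # >= 50 completed per sub-group
    to_approach = math.ceil(target / (1 - expected_refusal))
    return "sample", min(to_approach, expected_visitors)

print(plan_survey(40))                # ('census', 40)
print(plan_survey(300))               # ('sample', 63)
print(plan_survey(300, subgroups=2))  # ('sample', 125)
```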

Deciding who to approach It is worth taking some time to think about who you want to complete the survey, as this will affect when and where you ‘administer’ the questionnaires. Here are some questions you might ask yourself: • What story will you want to tell with the results? About which visitors? • Do you want to find out if the experience of different kinds of visitors (e.g. visitors from overseas, or visitors who come as part of an organised group) differs? If you want to exclude particular visitors (e.g. non-English speaking visitors or tour group members), you may need to give your data collectors one or more ‘filter’ questions to ask, or you may simply give your data collectors rules for what to do with such respondents. • Do you have any reason to wonder whether the visitor experience is different at different times of the year, or at different times of the week? For example, do you want to compare the experiences visitors have during peak periods to non-peak periods? Do you want to find out if visitors who come on weekends have a different experience than those who come during the week?

19

Methods and Tools for Assessing the Effectiveness of Face-to-Face Interpretive Programs

Minimising bias
The most important thing to consider when deciding who should complete the survey is that every visitor during the sampling period should have an equal chance of being selected by the data collector. If you follow a few simple rules, you can ask a sample of your visitors to fill out a questionnaire and, as long as most of them say 'yes', you can be confident that what they tell you as a group will be fairly indicative of what all your visitors would tell you if you talked to them. This method for choosing an unbiased sample of visitors is called probability sampling or random sampling. Remember, though, that if you can get a census of your visitors (i.e. give the questionnaire to everyone on a given day), then you don't need these rules.

1. The most important thing is to have a rule for selecting respondents so that all visitors have an equal chance to participate. For example, you do not want your data collectors to rely on people who volunteer to complete the questionnaire (e.g. pick a copy up from off a table), as these people often have stronger opinions (either positive or negative) than visitors overall. Nor do you want your data collectors to approach people based on how they look (e.g. 'she looks like a nice person who would say yes'), or their capacity to participate (e.g. 'I think this person is Australian and English-speaking so I'll ask them', or 'they don't have children so it will be easier for them to participate'). A good way to avoid this is to identify an imaginary point or line and give the questionnaire to the first person to cross the line after the previous questionnaire has been filled out (see the sketch after this list).

2. Decide on whose experience you want to report on – a cross-section of your visitors as a whole, peak vs. non-peak visitors, etc. If you have no reason to think that the season or day of the week affects their experience, then it really doesn't matter when you administer your questionnaires.

3. If it is important to compare visitors who come at different times of the week or year, then you need to lay out a calendar (whether seasonal or weekly) and select data collection days from each of the time periods (e.g. peak and non-peak; weekday and weekend). Ideally you would do this using some kind of probability method (e.g. drawing the day of the week out of a hat).

4. Once you have decided whose experience you are trying to evaluate, you need to come up with a 'sampling plan' for having data collectors on your tour or site approaching visitors.
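The following sketch shows the probability-sampling ideas from the list above in code form: drawing survey days 'out of a hat' (rule 3) and why the 'imaginary line' rule (rule 1) removes selection bias. The day groupings and variable names are hypothetical examples, not Tool Kit requirements.

```python
# Hypothetical sketch of rule 3's 'out of a hat' day selection.
import random

weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri"]
weekend = ["Sat", "Sun"]

# One survey day drawn at random from each period, so weekday and
# weekend visitors both have a chance of being included:
survey_days = [random.choice(weekdays), random.choice(weekend)]
print("Survey days this week:", survey_days)

# Rule 1's 'imaginary line' needs no randomness at all: the data collector
# approaches the FIRST visitor to cross the line once the previous
# questionnaire is done, which removes any discretion over who is asked.
```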

How to approach and interact with visitors
It is worth giving this section of the manual (and the FAQs that follow) to your data collectors to read, and then discussing it with them.

Once you have identified an individual, you need to approach and greet them, tell them who you are, and briefly explain the purpose of the survey, the importance of getting their feedback and how the results will be used. This will help increase the number of visitors who agree to participate. While doing your best to get a visitor to agree to complete the questionnaire, you do need to give them the opportunity to say 'no', to accept this graciously, and to record it as a 'refusal'.

It is important that they know that you are asking them as an individual (not the whole family or group or their partner or any other travelling companion) to complete the questionnaire. Tell them that they have been selected and that their responses are important. Visitors also need to know that their responses are anonymous and that you don't want or need their name on the questionnaire.

The data collector needs to be ethical and considerate. The visitor is on holiday, and this needs to be respected and the quality of their experience preserved. The visitor has the right to say no without feeling guilty or bad, or to complete only some of the questionnaire. And finally, each visitor is entitled to privacy and confidentiality (for example, the data collector should not be looking over their shoulder to see how they are responding).

Interpretation Evaluation Scenarios for Selecting Visitors

Scenario A: A tourism operator wants to test out a new interpretive service or product.

This is a great use of the Evaluation Tool Kit. The ideal thing to do would be to administer the questionnaires to a group of visitors prior to introducing the new product, as a benchmark or ‘control group’ with which to compare the evaluation of the new product. The important thing is to be consistent in every way you possibly can – e.g. the time of week and day, the person(s) you use to administer the questionnaires, the type and size of group – so that differences are truly the result of your interpretation and not some other factors.

Scenario B: A tourism operator wants to evaluate its overall interpretation program.

This is what the Evaluation Tool Kit is mainly designed for, and the key thing here is to get questionnaires from a range of visitors on a range of days, so that the results do not reflect a one-off event, bad weather, or an unusual group of visitors. Any anomalies should then cancel each other out, and the results should give you a good picture of what most visitors take away from the overall experience.


FAQs About Collecting Data

Can I use volunteers to collect the data?
Yes. Just be sure to train them in how to do it, following the instructions in the previous section.

Can I print the questionnaires double-sided? Can I reduce the font or reduce the whole thing on the photocopier to save paper?
Neither of these is recommended. Respondents sometimes fail to see the questions on the backs of pages (resulting in missed data), and the questionnaire has been carefully designed to make it easy to read, even for people who are visually challenged.

Why can’t I just use the coded version of the questionnaire in Appendix C when I photocopy them for visitors to fill out?
You do NOT want visitors to see the code numbers on each question, since these can bias results. Make sure you photocopy the questionnaire master from the CD, and not the coded version in Appendix C, when you are preparing to collect real data from visitors.

How many is enough? Can I do too many?
You cannot do too many. You should do as many as you can afford to do, and if you can get everyone (a census), then do it. If you have to select a sample, you should normally take into account how diverse your visitors are (the more diverse, the greater the number of respondents needed). Smaller ‘populations’ (total numbers of visitors) tend to require a higher proportion in the sample – for example, if you have only 50 visitors on a particular day or tour, you should sample at least half of them, and ideally all of them. Remember that in the end, you want to be able to say that your sample was representative of all visitors.

Can visitors complete the questionnaire as a couple or as a family?
This should be discouraged, but you may use your judgment as to whether the questionnaire reflects the views of the person you approached.

What if a parent gives it to a child to complete?
This should be discouraged, but you may use your judgment as to whether the questionnaire reflects the views of the person you approached. If it was completed by a very young child, you should certainly discard it later.

What if someone I approach is struggling to understand my English?
You need to assume that such a person will also struggle with written English, and is therefore ineligible as a respondent. However, it may be more polite to allow them to complete the questionnaire and then discard it later.


What if someone says they’d like me to read them the questions and circle the responses for them (e.g. they forgot their glasses)?
This should be fine. It is best in this case to read them the response categories, so they can visualise the endpoints. If they can’t see them for some reason, read aloud exactly what is written at each endpoint (pole) of each response scale. Don’t be tempted to prompt or use any words that might bias the respondent (lead them to respond in a particular way).

What if someone approaches me and asks if they can complete the questionnaire?
It is good PR and polite to allow them to complete the questionnaire, but you will need to discard it later. Volunteers often have an ‘agenda’ that is rarely representative of other visitors.

What do I do when someone says ‘no’?
This is called a refusal. If someone refuses to complete the questionnaire, just thank them for their time and move on to the next selected person. It is a good idea to record how many visitors refuse and why. If you are getting lots of refusals, you may want to re-examine how you are approaching visitors.

3 – Retrieve completed questionnaires from visitors

Whether you get back a completed or half-completed questionnaire or none at all, be sure to thank the visitor for his or her time. All visitors should leave feeling that their effort was valued and useful.

4 – Enter the data from the questionnaires into the template

Data preparation instructions

Once you have collected the questionnaires, you need to go through them and set aside any where there are many unanswered questions. We suggest, as a guide, that if more than half of the items in either question 1 or question 2 (or both) are unanswered, you discard the questionnaire. If the respondent has entirely ignored or missed any one of the three pages of questions, you should also discard the questionnaire. Including partial responses like these reduces the validity of the data.

You should set up a good system for keeping track of the date (or dates) when a particular set of questionnaires was administered, your sample size (how many visitors your data collectors approached), how many responded (i.e. subtract the number that refused) and how many were discarded (subtract these as well); the sketch below illustrates this bookkeeping. The questionnaires that you are keeping are your usable responses. You now need to number each questionnaire, making sure that each has a unique ID number written on the front, as the data from each questionnaire will be entered in a row in the Data Entry view of your Evaluation Template, starting from Person 1.
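The bookkeeping described above is simple arithmetic, but it is worth writing down. The sketch below shows one way to keep the counts straight; the variable names and the example numbers are illustrative only, not part of the Tool Kit.

```python
# Hypothetical tallies for one collection day; substitute your own counts.
approached = 60   # visitors your data collectors approached (sample size)
refused    = 9    # visitors who said 'no' (refusals)
discarded  = 4    # returned questionnaires set aside as too incomplete

responded = approached - refused        # questionnaires you got back
usable    = responded - discarded       # questionnaires you will enter
response_rate = responded / approached  # useful for spotting approach problems

print(f"Usable responses: {usable}")            # 47
print(f"Response rate: {response_rate:.0%}")    # 85%
```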


Coding and entering your data

In Appendix C you will find a coded version of each of the three Visitor Questionnaires (Food & Beverage, Heritage, and Nature). Each of these questionnaires shows you how to code each visitor’s responses to each of the questions. Take a moment now to find the questionnaire you are using. Note the red numbers above the possible responses to each question. These are the codes for that question. In entering the data from a completed questionnaire, all you need to do is note the number that corresponds to how that visitor replied to that question and enter it into the appropriate place in the data file, as explained in the section ‘How to enter your data into your Evaluation Template’.

You will note in Questions 1 and 2 that some of the items have codes that range left to right from 1 to 7, while others are in reverse order (i.e. they range from 7 to 1). This is so that the ‘high’ or ‘good’ end isn’t always on the left or always on the right. Doing this helps to ensure that respondents give concerted thought to each question, rather than ticking things off in a ‘patterned response set’. When entering your data, you should be careful to type in the number that actually corresponds to the visitor’s response to that question, noting that the set of numbers will often be different from that of the previous question.

To save time and effort (and to prevent errors), you might want to photocopy the coded questionnaire in Appendix C onto an overhead transparency. Then simply lay the corresponding transparency on each page of the visitor’s completed questionnaire and enter the number that corresponds to each response.

NOTE: If you want to be able to separate and make comparisons between different sets (groups) of responses (see Scenario A in the box below), you will need to create a new file for each new set of questionnaires – i.e. you will need to open the Evaluation Template and save it with a unique filename. If you want to get a single macro view of the outcomes of your interpretation (see Scenario B), then you can enter all response sets into one Evaluation Template.
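If you ever need to check a code by hand, the relationship between a reversed item and a normal one is simple arithmetic: on a 7-point item whose codes run 7 down to 1, the box in position p (counted from the left) is coded 8 - p. The following is a minimal sketch; which items are actually reverse-coded varies by questionnaire, so the example set of reversed items below is purely illustrative and the coded questionnaire in Appendix C remains the authority.

```python
# Illustrative only: always check the coded questionnaire in Appendix C
# rather than relying on this example set of reversed items.
REVERSED_ITEMS = {"1B", "1D"}

def code_for(item: str, position: int) -> int:
    """Convert the box a visitor ticked (position 1..7 from the left)
    into the code to enter, allowing for reverse-coded items."""
    if not 1 <= position <= 7:
        raise ValueError("position must be between 1 and 7")
    # On a reversed item the leftmost box is coded 7, so position p maps to 8 - p.
    return 8 - position if item in REVERSED_ITEMS else position

print(code_for("1A", 2))  # normal item: enter 2
print(code_for("1B", 2))  # reversed item: enter 6
```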

Interpretation Evaluation Scenarios for Data Entry Decisions

Scenario A: A tourism operator wants to know whether the responses from visitors change after implementing a new interpretation program.

The ‘before’ and ‘after’ samples of responses need to be entered into two separate Evaluation Templates.

Scenario B: A tourism operator wants to evaluate the outcome of its interpretation program for the entire season.

The questionnaires are administered to several different groups of visitors, and all responses are entered into a single Evaluation Template.


FAQs About the Completed Questionnaires

What if a lot of the questions are unanswered?
Make sure you have enough responses on any one questionnaire (at least 50% for each of the items under question 1 and question 2). Otherwise the questionnaire should be discarded.

What if the same box is ticked for every question?
That’s okay – we’ve taken care of response bias in the way we’ve designed and laid out the questionnaire and the wording of the individual questions. You can assume that respondents did indeed read every question.

Do I need to keep the questionnaires after the data are typed into the Evaluation Template?
Yes, store them safely and securely, as you may want to use them again for another purpose. You may also need to prove to your boss or to someone else that the visitors actually were surveyed!

If you are a less-experienced computer user, you may find it useful to refer to the Computer Jargon box when reading this section. Any word or phrase that is underscored and in red font is explained in the Computer Jargon box.


Computer jargon

• Cell – these are the boxes in the data file that you type a number in. The internal parts of a table such as the Observation Form also are called cells.
• Icon – these are the little images you see on your desktop that represent a particular folder, file or program – they are what you double-click on when you want to open something.
• Menu or drop-down menu – these are the options you get under each word on your tool bar. For example, if you click and hold your mouse on ‘File’, you get options such as ‘New’, ‘Open’, ‘Close’, ‘Save’, etc.
• Tab – this refers to the two labels at the top of the Evaluation Template.
• Toggle – this refers to moving back and forth between views (screens) – e.g. if you click on Data Entry and then click on Tables & Charts you are toggling between these two views.
• Tool bar – this is the row of words at the top of your screen. Reading from left to right, your tool bar says ‘File’, ‘Help’.

Items in inverted commas (‘) in this How-To Manual refer to the menus or the dropdown menu options on your screen (see what we mean by tool bar and menu in the previous Computer Jargon list).


How to enter your data into the Evaluation Template

Introduction

The top section of the application contains the toolbar, which allows you to:
o start a new data file
o open an existing data file
o close the current data entry screen
o save data to a file
o exit the application
o view information about the application.

Refer to the ‘Toolbar Menu’ section (following) for more information about the menu functions. The main section contains three screens:
o Welcome – This is the screen you see when you first start the application. From here, you can open an existing file (by clicking the ‘Yes’ button) or start a new file (by clicking the ‘No’ button). Once an existing file has been loaded or a new file is started, the Welcome screen becomes disabled and is no longer accessible, unless the current data entry screen is closed by clicking the ‘Close’ button in the tool bar menu.
o Data Entry – This screen, along with the Results Tables & Charts screen, is enabled and accessible once you’ve loaded an existing data file or started a new file. This screen displays a data entry table into which evaluation data can be entered. Refer to the ‘Entering Data’ section (following Toolbar Menu) for more information.
o Results Tables & Charts – This screen, along with the Data Entry screen, is enabled and accessible once an existing data file has been loaded or a new file is started. This screen displays the evaluated data for each indicator and sub-indicator. The data is represented in table form as well as a bar graph and pie chart. Refer to the ‘Results Tables & Charts screen’ section for more information.


Toolbar Menu

New
Clicking this button will allow you to start a new data file. After pressing New, you will be asked to select the setting most relevant to you, which will be one of:
o Food and Beverage
o Heritage Based
o Nature Based
After selecting a questionnaire, the ‘Data Entry’ screen will be displayed. Refer to the ‘Entering Data’ section for more information.
Note: If you are in the middle of editing data when the New button is pressed, you may be asked to save the changes to the file you are currently editing or to a new file.

Open
Clicking this button will allow you to open an existing data file. After pressing Open, you’ll need to select a previously saved file. The file must be an XML file created by this application (XML data files have the ‘.xml’ extension). After selecting an existing file to load, the ‘Data Entry’ screen will be displayed. Refer to the ‘Entering Data’ section for more information.
To look at an example data file and set of results: go to the ‘examples’ directory on your CD and select any one of the example files.
Note: If you are in the middle of editing data when the Open button is pressed, you may be asked to save the changes to the file you are currently editing or to a new file.

Close
Clicking this button will allow you to close the data currently being edited. The Data Entry and Results Tables & Charts screens will become disabled and inaccessible, and the Welcome screen will be displayed.
Note: If you are in the middle of editing data when the Close button is pressed, you may be asked to save the changes to the file you are currently editing or to a new file.


Save
Clicking this button will allow you to save data to a file. If you are editing an existing file, the changes will be saved to the file. If you are editing unsaved data, you will be prompted to create a new file (Save As).

Save As
Clicking this button will allow you to save data to a specific file. This may be useful if you are currently editing an existing file and would like to save the changes to a different file or to a new file, rather than the file from which the data were originally loaded.

Exit
Clicking this button will close the application.
Note: If you are in the middle of editing data when the Exit button is pressed, you may be asked to save the changes to the file you are currently editing or to a new file.

About
Clicking this button will display information about the application.


Entering Data

Data Entry screen

To begin entering data into the table, double-click on a cell within the table. For some questions, only integers between 1 and 7 can be entered; for others, only the integers 1 and 2 (corresponding to Yes/No questions) are valid. Enter 0 (zero) if the respondent has left the question blank, circled two responses or circled halfway between two responses. If you enter an invalid value, an error message will be displayed, allowing you to revert to the previous valid value or to change the value.

Invalid Input Error
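The application performs this check for you, but if you pre-screen values yourself (say, in a spreadsheet before typing them in), the rule is easy to express. The following is a sketch of the same validation logic; the function name and the yes_no flag are assumptions made for illustration, not part of the application.

```python
def is_valid_entry(value: int, yes_no: bool = False) -> bool:
    """Mirror the Tool Kit's data entry rule: 0 stands for blank,
    double-circled or between-boxes responses; otherwise 1-7 is valid
    for scale items and 1-2 for Yes/No items."""
    if value == 0:
        return True
    return value in (1, 2) if yes_no else 1 <= value <= 7

print(is_valid_entry(5))               # True  (scale item)
print(is_valid_entry(5, yes_no=True))  # False (Yes/No item allows only 0, 1, 2)
print(is_valid_entry(8))               # False (out of range)
```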


FAQs About Entering the Data

What if a question has been left blank by a respondent?
Type in a zero.

What if the respondent has circled two responses when they should have only circled one (e.g. they have circled both 5 and 6)?
Type in a zero.

What if the respondent has marked their response as halfway between two responses (e.g. halfway between 3 and 4)?
Type in a zero.

What do I do with handwritten comments on the questionnaires?
Keep these! People who make the effort to comment in writing often feel quite strongly about something. Read them and have someone type them into a Word document. You may want to use some of them to back up your results in a written report.

What do I type when I’m in the Tables & Charts view?
Nothing! This is all done for you.

5 – Look at your results

How to view your results on the screen

To switch between the Data Entry and Results Tables & Charts screens, click on the corresponding tab. If you have not saved the data you are editing to a file, you will be prompted to do so before you can view the results.

Data Entry tab


Results Tables & Charts tab

Use the menu on the left-hand side of the screen to view results for each indicator and sub-indicator. To view the results for a particular indicator, click on the corresponding node – the results will be displayed on the right-hand side of the screen. To expand an indicator to view its sub-indicators (if any), click on the plus sign next to the indicator node or double-click on the indicator node.

Results Tables & Charts screen

To switch between Bar Graph and Pie Chart representation of the data, click on the corresponding tab at the bottom of the screen (you may need to scroll down to the bottom of the screen).

Switching between Bar Graph and Pie Chart

How to print and save graphs, charts and data

You can easily print and save graphs and charts, and once you’ve saved them you can easily import them into a document (such as a handout or report) or a PowerPoint slide.


Windows
To print or save a graph or chart (as a .png image file), right-click on the image and select the ‘Print…’ or ‘Save as…’ option.

Mac OS X
To print or save a graph or chart (as a .png image file), hold down the Control key and click on the image (or right-click on the image, if you have a two-button mouse). From the options menu, select the ‘Print…’ or ‘Save as…’ option.

Selecting the ‘Save as…’ option

How to copy graphs, charts and data into Word or PowerPoint

You can easily copy your data and paste it into a document for printing or sending to someone by email.

Windows and Mac OS X
To copy the entered data, click on the Data Entry tab. Select the row(s) you wish to copy by clicking on any cell in the row. Multiple rows can be selected by holding down the Shift key. Press and hold the Control key (Control on Mac, Ctrl on Windows), and press the ‘C’ key to copy the selected data. This data can then be pasted into Microsoft Word, Excel, Notepad or any other text editor for easy printing.

Data cleaning

It is important to check for errors after the data have been entered – even the most careful person makes errors when entering data. You will probably want to do your data checking on a hard (printed) copy rather than on the screen (see the instructions for copying and printing data above).


You should look for errors in at least one of the following two ways:
1) check visually to see if any cells have been left blank, have characters instead of numbers, or have numbers that are out of range (e.g. larger than 7)
2) get another person to check some of your entries against the actual questionnaires (say, every 10th questionnaire). A very efficient way of doing this is for two people to work together, one reading the questionnaire aloud and the other checking the entries visually against the Data Entry printout.
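If you have also copied your entries into a spreadsheet or text file, a few lines of code can do the first of these checks for you. A minimal sketch, assuming the entries have been exported as comma-separated values (the application itself saves .xml files, so this applies to your own exported copy, and the filename is hypothetical):

```python
import csv

# Hypothetical export of the Data Entry table, one respondent per row.
with open("entered_data.csv", newline="") as f:
    for row_num, row in enumerate(csv.reader(f), start=1):
        for col_num, cell in enumerate(row, start=1):
            value = cell.strip()
            # Flag blanks and non-numeric characters...
            if not value.isdigit():
                print(f"Row {row_num}, column {col_num}: not a number ({cell!r})")
            # ...and out-of-range values (0 is allowed: it stands for a blank
            # or invalid response).
            elif not 0 <= int(value) <= 7:
                print(f"Row {row_num}, column {col_num}: out of range ({cell})")
```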

6 – Interpret your Results

The results for each indicator include two parts. First is a table showing the categories of the overall indicator scores (e.g. 4-8, 9-12, etc.) and the percentage of visitors whose scores on that indicator fall into each category. Second is a bar chart which presents the same results in graphic form. Depending on your needs, one or the other of these outputs will be more useful to you. The tables give a more precise numerical picture, but the bar charts allow a more intuitive view of how the responses varied among visitors.

An example from a nature interpretation site evaluation

Following is an example of the results table and bar chart for Indicator A (which measures how much the interpretation impacted visitors’ appreciation of Indigenous people’s connection to nature). Notice that the table shows the precise percentage of visitors whose Indicator A scores fall into each of the categories. In this case, you can see that there is quite a lot of disagreement among visitors as to whether the interpretation impacted their appreciation of Indigenous people’s connection to nature. About 43% rated this item as ‘low’ or moderately low, and about 37% rated it as ‘high’ or moderately high. The bar chart shows that there is also a fairly even spread among the intermediate responses – a sure sign of disagreement. If an aim of the interpretive program is to strongly impact visitors’ appreciation of indigenous connections to nature, program supervisors might not be very satisfied with these results.

Interpreting the numbers

In order to determine more precisely what these results mean (and how the interpretive program might be improved) you must first be able to interpret the exact meaning of the range of possible scores for Indicator A. In other words, why is the lowest possible score 4 and the highest possible score 28 for Indicator A? And why do other indicators sometimes have different low and high values? You may recall from the description of the indicators in Section 1 that Indicators A, B, C, D, I and J are actually made up of multiple sub-indicators (these are discussed further in Appendices A and B). In the Nature questionnaire, Indicator A consists of 4 sub-indicators measured by questions 1E, 1N, 1R and 1T. To get the overall Indicator A score for any given visitor, the computer simply adds up the visitor’s replies to each of these four questions. These are the scores shown in the left column of each results table. If you look at the Nature Questionnaire in Appendix C, you can see that visitors replied to each of the questions on a scale ranging from a low of 1 to a high of 7.


Therefore, the lowest possible summed score would be 1+1+1+1 = 4 (i.e. the visitor’s answer corresponded to a 1 on all four questions), and the highest possible summed score would be 7+7+7+7 = 28 (i.e. the visitor’s answer corresponded to a 7 on all four questions). This means that the value of Indicator A on the Nature questionnaire can range from a low of 4 (Did not impact my appreciation) to a high of 28 (Impacted my appreciation). We have labelled the extreme values in the table and bar chart ‘low’ and ‘high’ and put them into groupings simply to make the results easier to read. The intermediate values have no specific meaning other than that they indicate lower or higher impact relative to one another.

As can be seen, the interpretive program at our example site is not currently performing very well with respect to Indicator A. Overall, large percentages of visitors do not feel that their appreciation of indigenous connections to nature has been impacted. If Indicator A were important to your interpretive program, you would be well advised to begin looking for reasons for this sub-par performance and perhaps take corrective action.

A: Impact on appreciation of indigenous connections to nature

Possible range of scores    Valid percent
4-8 (Low impact)            30.0%
9-12                        13.3%
13-16                        6.7%
17-20                       13.3%
21-24                       20.0%
25-28 (High impact)         16.7%
Total                      100.0%

[Bar chart: percentage of visitors in each Indicator A score category, from Low (4-8) to High (25-28)]

Generally speaking, you can interpret the results for any overall indicator in two ways. First is by looking at the relative percentages of visitors who responded toward the high end versus the low end (as we have done in the above illustration). When an indicator is measured with only one question (as are Indicators E, F, G and H), this is the only option available to you.


However, in the case of indicators that are made up of multiple sub-indicators (these include Indicators A, B, C, D, I and J), you can gain more information by also looking at the results for the questions corresponding to each of the sub-indicators. In our example, results for the 4 sub-indicators are as follows:

Impact on appreciation of values Indigenous people attach to the land (Question 1E)

Response            Valid percent
1 Did not impact    16.0%
2                    8.0%
3                    8.0%
4                   12.0%
5                   12.0%
6                   20.0%
7 Did impact        24.0%
Total              100.0%

Impact on my appreciation of Indigenous views of the land (Question 1N)

Response            Valid percent
1 Did not impact    12.0%
2                   24.0%
3                    0.0%
4                   12.0%
5                   12.0%
6                   20.0%
7 Did impact        20.0%
Total              100.0%


Impact on my appreciation of Indigenous views of wildlife (Question 1R)

Response            Valid percent
1 Did not impact    12.0%
2                   12.0%
3                    4.0%
4                   16.0%
5                   12.0%
6                   32.0%
7 Did impact        12.0%
Total              100.0%

Impact on appreciation of the historic relationship Indigenous people have with the land (Question 1T)

Response            Valid percent
1 Did not impact    12.0%
2                   16.0%
3                    0.0%
4                   28.0%
5                    8.0%
6                   28.0%
7 Did impact         8.0%
Total              100.0%
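Each of these sub-indicator tables is simply a percentage distribution of the codes entered for one question: the count for each code divided by the number of usable responses. A minimal sketch of the same calculation, using an invented list of responses:

```python
from collections import Counter

# Hypothetical codes entered for one question across 25 respondents.
responses = [1, 7, 6, 4, 2, 7, 5, 6, 1, 4, 6, 7, 3,
             2, 6, 5, 4, 7, 1, 6, 2, 4, 5, 6, 7]

counts = Counter(responses)
total = len(responses)
for code in range(1, 8):
    percent = 100 * counts[code] / total  # Counter returns 0 for absent codes
    print(f"{code}: {percent:.1f}%")
```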


If you compare the results for the four sub-indicators, you can see that the poor performance on Indicator A is found mainly in two of them (questions 1E and 1N, which respectively have to do with interpretation’s impact on visitors’ appreciation of Indigenous people’s attachment to and views of the land). Ratings on questions 1R and 1T were generally higher. Thus a corrective action might be to motivate interpreters to include better coverage of content pertaining to questions 1E and 1N and to help them deliver stronger themes focused on the special ways in which Indigenous people view and feel attached to the land. Of course, if producing these kinds of impacts were not an important outcome of interpretation at your site, then you would probably not want to take any action based on the results for Indicator A.

FAQs About Interpreting the Results

How do I compare these findings to previous findings or future ones?
The easiest way is to lay printouts of your bar graphs side by side and visually compare them (a numerical alternative is sketched after these FAQs).

How do I email the findings to other people?
Since others may not have the Tool Kit application on their computer, the safest way of doing this is to follow the instructions in the previous section, LOOK AT YOUR RESULTS – ‘How to print and save graphs, charts and data’ and ‘How to copy graphs, charts and data into Word or PowerPoint’.

What if performance is poor on many of the indicators?
Please see Section 4 for what action you might need to take.
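If you would rather compare two evaluations numerically than visually, you can line the category percentages up side by side. The following is a sketch under the assumption that you have already read the percentages off each results table; the ‘before’ figures below are taken from the Indicator A example earlier in this section, while the ‘after’ figures are invented.

```python
# Category percentages for Indicator A from two evaluations.
before = {"4-8": 30.0, "9-12": 13.3, "13-16": 6.7,
          "17-20": 13.3, "21-24": 20.0, "25-28": 16.7}
after  = {"4-8": 18.0, "9-12": 10.0, "13-16": 8.0,   # invented for illustration
          "17-20": 16.0, "21-24": 26.0, "25-28": 22.0}

print(f"{'Category':<10}{'Before':>8}{'After':>8}{'Change':>8}")
for category in before:
    change = after[category] - before[category]
    print(f"{category:<10}{before[category]:>7.1f}%"
          f"{after[category]:>7.1f}%{change:>+7.1f}%")
```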


SECTION 4: TAKING ACTION BASED ON RESULTS OF THE VISITOR SURVEY

Now that you’ve completed the visitor survey and you know how to read and interpret the results, what conclusions can you draw about your interpretation? How successful is it? What aspects of your interpretation are working for you and what aspects are not as effective as they could be?

Where indicators show you are already achieving at a high level of effectiveness, we hope that you will view them as affirmation of a job well done and will let the staff responsible know about these successes. In addition, these achievements should confirm in your mind the value that high quality interpretation brings to your organisation. You now have compelling evidence that interpretation serves an important role in your core business.

However, where the indicators suggest that there is room for improvement, how do you decide what action to take? Everything you’ve done to this point has been to help you see what action, if any, is needed to make your interpretation even more successful than it already is. The following table provides a guide to pinpointing the shortcomings of your current interpretive program and to considering possible courses of action based on the results of your evaluation. The table lists the 10 overall indicators measured in the Visitor Survey and discusses probable reasons for obtaining a low score on each. In the right-hand column, possible corrective actions and the rationale behind them are presented. The purpose of the table is not to tell you what is wrong and how to fix it, but rather to give you a place to start.

Aside from the guidance provided in the table, a key to your efforts to improve interpretation at your site or business will be your own experience and wisdom regarding your staff, the setting, and the kinds of outcomes most important to your success. Toward this end, we strongly advise you to share the evaluation results with your staff and to explore avenues for continued improvement together with them.

Analysing and Acting on the Results of Your Evaluation

Indicator: A* (Heritage)
What a low score indicates: Interpretation is not impacting visitors’ current world view via empathy with the historic period and people who lived then. Probable reasons for poor performance on any of the sub-indicators include lack of content coverage and weak themes related to the content.
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. Coach interpretive staff to strengthen either the content associated with these sub-indicators, the themes they are developing around these content areas, or both.

Indicator: A* (Nature)
What a low score indicates: Interpretation is not impacting visitors’ appreciation of Indigenous people’s connection to nature. Probable reasons for poor performance on any of the sub-indicators include lack of content coverage and weak themes related to the content.
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. Coach interpretive staff to strengthen either the content associated with these sub-indicators, the themes they are developing around these content areas, or both.

Indicator: B*
What a low score indicates: Interpretation is failing to provoke visitors to thought. Probable reasons for poor performance on any of the sub-indicators include weak themes, predominantly one-way communication, an interpreter’s reluctance to ask questions and otherwise engage the audience in thought during the presentation, and weak conclusions.
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. Work with interpretive staff on strengthening their themes and conclusions. Encourage them to find ways to engage the audience in thought during their presentations, especially by posing questions that provoke thought.

Indicator: C* (Heritage)
What a low score indicates: Interpretation is failing to underscore the importance of heritage preservation. Probable reasons for poor performance on any of the sub-indicators include weak themes (heritage is not made to seem important and worthy of preservation) and content coverage (the concept of preservation or your role in it is not being introduced into the program).
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. Work with interpretive staff on strengthening their themes so that they connect what is truly significant about the place in terms that speak loudly to the visitors. A low score on Indicator C will probably be accompanied by a low score on Indicator J (meaning and relevance). Encourage interpreters to find better ways to bridge the story of the place to what is already important in the lives of the visitors (e.g. through examples, analogies, and strong metaphors).

Indicator: C* (Nature)
What a low score indicates: Interpretation is failing to underscore the importance of nature conservation. Probable reasons for poor performance on any of the sub-indicators include weak themes (the natural values of the place are not made to seem important and worthy of protection) and content coverage (the concept of preservation or your role in it is not being introduced into the program).
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. Work with interpretive staff on strengthening their themes so that they convey what is truly significant about the natural values of the site in terms that speak loudly to the visitors. A low score on Indicator C will probably be accompanied by a low score on Indicator J (meaning and relevance). Encourage interpreters to find better ways to bridge the story of the place to what is already important in the lives of the visitors (e.g. through examples, analogies, and strong metaphors).

Indicator: D*
What a low score indicates: Interpretation is failing to satisfy visitors’ expectations of enjoyment and stimulation. Probable reasons for poor performance on any of the sub-indicators include weak themes, boring presentation style, and lack of apparent enthusiasm.
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. Work with your interpretive staff to strengthen their themes. Weak themes may be due to the fact that interpreters sometimes choose to focus their interpretation on something, or an aspect of the place, that simply isn’t that interesting or motivating to them. Encourage these interpreters to find their own passion about the place and to direct the content of their interpretation, and the themes they develop around that content, to connect more with what they, themselves, truly find important. Work on presentation mechanics that will enhance visitors’ enjoyment of their activities; look for better use of analogies and metaphors to connect personally to what visitors know and care about; and encourage them to simplify the organisation of their activities to make the content easier for visitors to follow.

Indicator: E
What a low score indicates: Interpretation is failing to stimulate visitors to want to participate in additional interpretive activities. Reasons for poor performance on this single-item indicator may include poor performance on Indicators B (elaboration), D (lack of enjoyment), or J (lack of relevance).
Actions you might want to take: See the recommended actions for Indicators B, D and J.

Indicator: F
What a low score indicates: Interpretation is failing to stimulate in visitors a desire to buy a memento, souvenir or product that is directly related to the story of the site. The probable reason for poor performance on this single-item indicator is that it does not occur to visitors that they can purchase a keepsake to remind them of the place. Other reasons might include poor performance on Indicators B (elaboration), D (lack of enjoyment), or J (lack of relevance).
Actions you might want to take: In accordance with what your organisation considers appropriate and consistent with the image it wants to project, you may want to encourage interpretive staff to make reference (subtle or outright) to the opportunity visitors have to purchase reminders of their visit in your gift shop or retail sales area. Depending on uniform requirements, interpretive staff may even want to wear or otherwise display items that most directly relate to the place or to the individual themes they develop in their presentations. See also the recommended actions for Indicators B, D and J.

Indicator: G
What a low score indicates: Interpretation is failing to make visitors want to stay longer. Obviously, many factors outside of the interpreter’s control influence whether visitors actually extend their stay at the site. But if the interpretation is high quality, the desire to stay longer would ideally result. Reasons for poor performance on this single-item indicator may include poor performance on Indicators B (elaboration), D (lack of enjoyment), or J (lack of relevance).
Actions you might want to take: See the recommended actions for Indicators B, D and J.

Indicator: H
What a low score indicates: Interpretation is failing to make visitors want to return for a repeat visit. Obviously, many factors outside of the interpreter’s control influence whether visitors will actually return some day. But if the interpretation is high quality, the desire to return would ideally result. Reasons for poor performance on this single-item indicator may include poor performance on Indicators B (elaboration), D (lack of enjoyment), or J (lack of relevance). Low performance on this indicator will likely be matched with low performance on Indicator I (positive word-of-mouth advertising).
Actions you might want to take: See the recommended actions for Indicators B, D and J.

Indicator: I*
What a low score indicates: Interpretation is failing to make visitors want to say positive things about your site (positive word-of-mouth advertising). Obviously, many factors outside of the interpreter’s control influence whether visitors say positive things and/or recommend your site to others. But if the interpretation is high quality, then ideally this will result in visitors wanting to talk positively about your site. Since this indicator comes closest to capturing visitors’ overall satisfaction with their experience, poor performance on this indicator is probably a result of poor performance on one or more of the other indicators, particularly Indicators B, D and J. Low performance on this indicator will also likely be matched with low performance on Indicator H (desire to return for a repeat visit).
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. These sub-indicators capture visitors’ assessment of how interesting and enjoyable the place is, and whether it is worth the money and time to visit. You might want to examine your performance on overall Indicators B, D and J, and the sub-indicators that comprise them in particular, since any combination of these may help you identify more specific strategies for increasing positive word-of-mouth advertising as a result of your interpretive program.

Indicator: J*
What a low score indicates: Interpretation is failing to make itself meaningful and relevant to visitors. Probable reasons for poor performance on any of the sub-indicators include weak themes; excessive use of technical vocabulary or passive sentence structure; not using pertinent examples to illustrate new concepts; lack of analogies to show similarities between familiar and unfamiliar ideas; lack of metaphors and similes that allow visitors to be comfortable with new concepts; too infrequent use of direct eye contact with visitors; avoidance of learning and/or using visitors’ names or the word ‘you’ in commentaries; or failure to connect the interpretation to things of symbolic or emotional significance to visitors.
Actions you might want to take: Determine the reasons for poor performance by identifying the sub-indicators with the lowest scores. These sub-indicators capture the degree to which interpretation is connecting to what visitors already know and care about. Work with interpretive staff to strengthen their themes, connecting them more forcefully with things visitors care about, their values and concepts of symbolic importance to them. Listen to their interpretive commentaries, noting uses of unnecessary jargon and opportunities to build in sharper examples and analogies to make the information more meaningful to visitors. Encourage interpreters to explore uses of metaphors and similes, both in their themes and in the information content they present in support of their themes. Focus on improving presentation mechanics, including increasing eye contact with visitors and use of personal words like ‘you’ and the visitors’ actual names.

* Indicator is measured using multiple sub-indicators


SECTION 5: HOW TO CONDUCT THE OBSERVATIONS

The observation method allows you to use listening and observation to determine the extent to which visitors were provoked to interact with the presenter (interpreter, guide, etc.). In the suite of 11 indicators developed for this Tool Kit, this was the only indicator of interpretation success that could not be captured in a visitor survey. For this reason, the Tool Kit uses a separate Observation Form to help you assess this interaction. As with the visitor survey, you should follow a series of steps:
1. Prepare to conduct your observations
2. Observe, listen and tally results
3. Interpret your results

1 – Prepare to Conduct your Observations

You need to let your interpretive staff know that you are undertaking an evaluation and why. As with the visitor survey, staff may need reassurance that this is not a performance appraisal and that individual observations will be pooled to get a sense of how effective the overall interpretive program is at provoking visitors to interact and engage with their interpreter. Let staff know that the outcomes will be used to enhance the quality and thus the status of face-to-face interpretation within your organisation.

You will need to print one copy of the Observation Form for each interpretive activity (i.e. presentation, guided tour, etc.). As with the questionnaires, these are included on the CD only as PDF files, so you will need Adobe Reader in order to open and print the file (a copy is also contained in Appendix D). And as with the visitor survey, you may want to print on different colours of paper in order to distinguish between observations done on particular groups of visitors or during particular seasons.

Deciding who will undertake the observations is important, as they will be observing individual guides or interpreters. It is best if the observations can be conducted unobtrusively, that is, in a way that neither the presenter nor the visitors know when a particular program is being observed, so that the interpretation is delivered as normally and naturally as possible. This suggests that using someone who is not a current member of staff may be preferable to someone known to the presenter. Either way, observers need to be mature, professional people willing to take the process seriously and, perhaps most important, committed to maintaining confidentiality with respect to those they observe and the organisation generally. If you can afford it, and especially for large groups, use two independent observers, so that you can pool their observations.

You will also want to make some careful decisions about when and where to observe visitors interacting with interpreters, as interaction can differ depending on the setting, the size of the group, the type(s) of visitors, the time of year, the time of day and so on. The best approach would be to do multiple observations that cover the range of experiences you deliver, recording the date, location, presenter, group size, weather, the name of the observer and other factors that might influence the results. On the other hand, if you want to make comparisons between programs, say between different types of presentations or different group sizes, be very careful about maintaining consistency in the other ‘variables’, for example, the season, the time of day, the location, the type of visitor, and the person doing the observations. Otherwise it will be very hard to know exactly why there was more interaction for one program compared to another.

The observer should be provided with the observation forms, pencils and a discreet clipboard, but not a uniform or name tag, as you want them to dress and look like just another group member rather than a researcher or member of staff. As with the visitor survey, be considerate of the safety and comfort of your observers and ensure they are not overly fatigued by having to do too many observations on a given day. Careful observations require energy and concentration. We suggest you give your observers the following checklist to complete prior to doing their observations.


Before you set out, complete this observer’s checklist:
• Know and familiarise yourself with the particular program and the setting where you will be observing
• Dress appropriately, i.e. smart casual; do not wear a staff uniform or a staff name tag
• Make sure you have a few observation forms and ensure they are identifiable, such as by colour or by today’s date
• Make sure you have sharpened pencils and a clipboard
• Bring a mobile phone (but make sure it is turned off!) and the contact details of your supervisor in case something unexpected occurs
• Consider your location and the weather and prepare accordingly, e.g. a hat, sun-block, sunglasses, a rain jacket, umbrella, and insect repellent
• Familiarise yourself with the observation form and do at least one practice run on an interpretive program
• If in doubt, write down any rules you apply for allocating a particular interaction to a particular cell

If someone other than you conducts the observations, you will also want to ensure that the data are returned to you immediately and kept in a secure place, and that you or someone else debriefs with the observer to ensure that anything that may have influenced the results on the day has been captured in writing.

2 – Observe, Listen and Tally the Results

If you are the observer, position yourself so that you appear to be a participant in the group, but somewhere where you can clearly see and hear all the other participants in the group, including the presenter. Each interaction you observe between the presenter and a visitor (not between visitors) will be recorded with a tally (mark) on the observation recording sheet as follows:


                              Who initiated the interaction?
What was the response?        initiated by presenter    initiated by visitor
Verbal (used words)           A                         C
Non-verbal or physical        B                         D

First choose the column:
• Was the interaction initiated by the presenter? OR
• Was the interaction initiated by a visitor?
For example, if the presenter asks the audience a question and there is a response from a person in the group, then depending on how that person responded you would record this in one or both of the cells (A, B) in the left-hand column (presenter-initiated). (If there was no response from the group, you would not record this as an interaction.)

Then choose the row:
• Was there a verbal response from the visitor? – in other words, did it involve a two-way exchange of words between the presenter and a visitor? (cells A or C)
• Was there a non-verbal / physical response from the visitor? – did the visitor react bodily or audibly (e.g. a laugh or gasp) but without words? (cells B or D) OR
• Was the interaction both verbal and physical?

For instance, in the above example, if a visitor responds with a verbal answer to the presenter’s question, you would tally it like this:

                              Who initiated the interaction?
What was the response?        initiated by presenter    initiated by visitor
Verbal (used words)           I
Non-verbal or physical


If you see one visitor responding with a giggle, another by shrugging her shoulders, and a third by answering the interpreter’s question, you would make a tally mark for each visitor like this:

                              Who initiated the interaction?
What was the response?        initiated by presenter    initiated by visitor
Verbal (used words)           I
Non-verbal or physical        II

Similarly, if the presenter invites someone to physically participate in the presentation or guided activity (for example, by holding a prop) and a person responds, you would record this as presenter-initiated physical interaction (cell B), and if they also speak while participating, you would also put a tally mark in cell A. If the presenter passes around a prop, then you would record this as a presenter-initiated physical interaction each time a visitor ‘reacts’ to the prop. In some cases, it might be more appropriate to make a note under the table, for example, saying ‘presenter passed around a gold nugget and all visitors handled it’ (or ‘all except 2’, or whatever the case was).

When a visitor initiates the interaction, you would make a tally mark in the second column. For example, if a member of the group asks a question, then you would record this in cell C, like this:

                              Who initiated the interaction?
What was the response?        initiated by presenter    initiated by visitor
Verbal (used words)                                     I
Non-verbal or physical


If a visitor picks something up and brings it to the presenter, this would be recorded in cell D:

                              Who initiated the interaction?
What was the response?        initiated by presenter    initiated by visitor
Verbal (used words)
Non-verbal or physical                                  I

If the visitor also makes a comment or asks a question when showing something to the presenter, then this interaction would be tallied as both a verbal (cell C) and a physical (cell D) interaction, like this:

                              Who initiated the interaction?
What was the response?        initiated by presenter    initiated by visitor
Verbal (used words)                                     I
Non-verbal or physical                                  I

If you observe circumstances that may be affecting the level or type of interaction, it is important to record these under the comments section at the bottom of the form. For example, if in a particular program one person in the group particularly dominates the interaction, you might want to note this, as it may cause both the presenter and other audience members to respond in a particular way. If you observe a particularly unruly group or suspect that a number of group members have been drinking, you should record that also. If the weather is particularly adverse and affecting interaction, note it down. Finally, if your observations tell you that the presenter is having an off-day, or perhaps suspects that they are being observed and so is particularly nervous, record it. All such observations can only help the interpretation of your data.


FAQs About Observations

What if I can’t hear or understand a presenter’s question or a visitor’s response?
You only need to record whether the visitor or the presenter initiated the interaction (who asked whom) and whether the visitor responded with words or with some other non-verbal or physical response.

What if the visitor’s response to a presenter’s question is wrong?
It doesn’t matter if it is right or wrong, brief or detailed; it should be tallied as a single verbal response.

What if a visitor’s response is negative?
Any verbal or non-verbal response, whether it is positive, negative or neutral, is tallied in the same way as any other response.

How do I record ‘laughter’ or a ‘moan’ or ‘gasp’ as a response?
If no words are spoken, this should be tallied as a non-verbal response.

Can I add a section to collect other information such as the socio-demographic profile of the group?
Yes, but be careful that this doesn’t distract you from the central focus of the evaluation. Observers should collect this information only before the program starts or after the program finishes, and concentrate on observing and listening for interactions during the activity itself.

What if a visitor asks me what I am doing?
It would be best to say that you are doing some research about interpretation, but not go into detail about what it is or why you are doing it.

To see the results, simply count up the number of tally marks in each cell. You can also tally by column to determine how many interactions were initiated by the presenter (cells A and B) compared to the visitor (cells C and D). And you can tally by row to determine how many responses were verbal (cells A and C) compared to non-verbal/physical (cells B and D).
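Totalling the tallies is simple arithmetic, and a few lines of code make the cell, column and row sums explicit. The counts below are invented for illustration.

```python
# Hypothetical tally counts from one completed Observation Form.
tallies = {"A": 12, "B": 5, "C": 4, "D": 2}

presenter_initiated = tallies["A"] + tallies["B"]  # column 1 (cells A and B)
visitor_initiated   = tallies["C"] + tallies["D"]  # column 2 (cells C and D)
verbal              = tallies["A"] + tallies["C"]  # row 1 (cells A and C)
non_verbal          = tallies["B"] + tallies["D"]  # row 2 (cells B and D)

print("Initiated by presenter:", presenter_initiated)  # 17
print("Initiated by visitor:  ", visitor_initiated)    # 6
print("Verbal responses:      ", verbal)               # 16
print("Non-verbal/physical:   ", non_verbal)           # 7
```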


3 – Interpret your Results

What you see in your results and what story they tell will depend in part on what you need to know, how many presentations you observe, and over what period of time. When you look at the results from a single observation form, be careful not to assume or conclude too much, as there may be factors at play on that particular day that are atypical (the composition of the group may have been unusual; the interpreter may have been feeling unwell; the weather may have been appalling).

If you have multiple observations of a particular interpretive program, you can be more confident that the collective results tell the real story of how much visitors are provoked to interact with their presenter. Multiple observations are also more reliable in comparing the type of interaction that tends to occur. For example, if most or all of the observation results are dominated by non-verbal responses, this may suggest that visitors generally do not feel comfortable or welcome in verbally interacting with their interpreter. If you find very few interactions initiated by presenters over a series of observations, this may suggest that your presenters could be doing more to encourage visitors to interact. The next section of the Manual discusses a range of findings and what actions are suggested by these.


SECTION 6: TAKING ACTION BASED ON RESULTS OF THE OBSERVATIONS

Now that you’ve completed your observations and you know how to read and interpret the results, what conclusions can you draw about this aspect of your interpretation? How successful is it? What aspects of your interpretation are provoking interaction and what aspects are not as effective as they could be? As with the visitor survey, where the results show you are already achieving interaction at a high level, we hope that you will view this as affirmation of a job well done and will let the staff responsible know about these successes.

However, where the results suggest that there is room for improvement, how do you decide what action to take? The following table provides a guide to pinpointing the shortcomings of your current interpretive program with respect to provoking interaction, and to considering possible courses of action based on the results of your evaluation. The table lists the columns and rows in the Observation Form and discusses probable reasons for shortcomings in each. In the right-hand column, possible corrective actions and the rationale behind them are presented. The purpose of the table is not to tell you what is wrong and how to fix it, but rather to give you a place to start.

What actions you take in response to any of the examples provided in the previous section really depend on your organisational goals for interpretation. Is it always important that your interpretation provoke visitors to interact with staff, or is this more important with certain kinds of visitors (say, for example, children) or certain interpretive presentations (e.g. on-site talks to small groups)? You may find that particular locations or particular programs are not creating the amount of interaction with visitors that you had hoped they would. The results from your observations will serve as a diagnostic for this aspect of your interpretation.

Aside from the guidance provided in the table, a key to your efforts to improve interpretation at your site or business will be your own experience and wisdom regarding your staff, the setting, and the kinds of outcomes most important to your success. Toward this end, we strongly advise you to share the evaluation results with your staff and to explore avenues for continued improvement together.


Analysing and Acting on the Results of Your Observations

What a low number of tallies indicates: Column 1 (not much initiation by presenter)
Probable reasons: Language / cultural barriers; group size; lack of comfort and/or familiarity with the audience; inflexible itinerary (an expectation to deliver a scripted commentary in a pre-determined timeframe).
Actions you might want to take as a result: Provide interpretive staff with strategies for getting to know their audiences, including pre-presentation banter. Provide cross-cultural training for staff. If isolated to particular types of visitors or interpretive activities, discuss with staff, and coach interpretive staff to explore a range of techniques for initiating and encouraging interaction with these visitors. Encourage and reward efforts to try new techniques, or old ones in new settings or with different audiences. Consider reducing group sizes. Minimise or eliminate use of scripts.

What a low number of tallies indicates: Column 2 (not much initiation by visitor)
Probable reasons: Language / cultural barriers; group size and other group characteristics; formality of setting and/or style of presenter; presenter's verbal and non-verbal cues; mismatch of content and depth of interpretation to the audience.
Actions you might want to take as a result: Coach interpretive staff to actively encourage visitors to initiate interaction (such as asking questions). If isolated to particular types of visitors or interpretive activities, discuss with staff, and coach interpretive staff to explore a range of techniques for provoking visitors to initiate interaction. Provide cross-cultural training for staff. Provide non-verbal communication training for staff. Consider reducing group sizes. Minimise or eliminate use of scripts.

What a low number of tallies indicates: Row 1 (not much verbal response from visitors)
Probable reasons: Language / cultural barriers; group size and other group characteristics; formality of setting and/or style of presenter; presenter's verbal and non-verbal cues; mismatch of content and depth of interpretation to the audience.
Actions you might want to take as a result: Coach interpretive staff to actively encourage responses from audiences. If isolated to particular types of visitors or interpretive activities, discuss with staff, and coach interpretive staff to explore a range of techniques for provoking visitors to respond to interaction. Provide cross-cultural training for staff. Provide non-verbal communication training for staff and have staff practice being active listeners and facilitators.

What a low number of tallies indicates: Row 2 (not much non-verbal / physical response from visitors)
Probable reasons: Language / cultural barriers; group size and other group characteristics; formality of setting and/or style of presenter; presenter's verbal and non-verbal cues; mismatch of content and depth of interpretation to the audience.
Actions you might want to take as a result: Coach interpretive staff to actively encourage this with audiences by role modelling. If isolated to particular types of visitors or interpretive activities, discuss with staff, and coach interpretive staff to explore a range of techniques for provoking visitors to respond to interaction. Provide cross-cultural training for staff. Provide non-verbal communication training for staff and have staff practice being active listeners and facilitators.


SECTION 7: NEXT STEPS

The Interpretation Evaluation Tool Kit lends itself not only to multiple evaluations over time, but also to cross-site comparisons. The STCRC hopes to encourage this kind of comparison by creating a website where you and other operators can upload your results. We anticipate that this will be made available at no charge to you, and that the data and results you upload will be anonymous. In this way, you'll be able to benchmark the outcomes of your interpretation against other comparable operations without fear of embarrassment over your weaknesses, or risk to competitive advantage from revealing too much about your strengths. For more information about how this Tool Kit was developed, and for further reading about interpretation evaluation generally, please contact the authors for a copy of the full report, Development and Refinement of a Methodology and Evaluation Tools for Assessing Interpretation Outcomes (STCRC Technical Report).


APPENDIX A: HOW THE INDICATORS WERE SELECTED AND DEVELOPED

Selection of the 11 indicators

In selecting the indicators, we wanted to make sure that they reflected the types of outcomes that interpretation providers actually want from their interpretive programs (for example, enhanced visitor enjoyment, positive visitor attitudes about conservation, positive word-of-mouth advertising, and provoking visitors to think about the values inherent at the site). We also wanted to make sure that they were theoretically valid based on what is known about interpretation's potential impacts on how visitors think, feel, and possibly behave with respect to the things being interpreted for them.

The 11 indicators emerged from the expressed priorities of a range of industry partners. We used a structured facilitation process at two sites where a wide variety of face-to-face interpretive programs are offered (Port Arthur in Tasmania and Sovereign Hill in Victoria) in order to learn from staff what they felt were the most important indicators of 'successful' or 'effective' interpretation at those sites. To ensure that the indicators selected would be relevant to a wide cross-section of industry needs, participants in these discussions included program managers, frontline interpreters and guides, sales and marketing staff, and mid- or executive-level administrators.

A pool of 54 responses resulted from these sessions: 24 at Sovereign Hill and 30 at Port Arthur. Of course, there were many overlaps and duplications, so we undertook a rigorous process of assigning the responses to categories and then used other researchers to check and verify our decisions. We then began a process of elimination, based on the aims of the Tool Kit. For example, several of the indicators were staff-focused (e.g. it will be good for staff morale or make the guide's job easier), some were other-focused (i.e. the outcome would be good for another stakeholder group but was not relevant to what the organisation or business was trying to achieve with its interpretation), and several were inputs rather than outputs (e.g. a feature of interpretation design, infrastructure or delivery rather than an outcome of the interpretation). This resulted in a list of 26 indicators that were visitor responses to interpretation (e.g. visitors enjoyed or were impacted by the interpretation). Since the purpose of the Evaluation Tool Kit was to provide a practical method of evaluating visitor responses to interpretation, these 26 indicators were selected for further analysis. Collectively, they focused on three main categories of outcomes:

• cognitive outcomes: what visitors might think, know or believe as a result of interpretation (e.g. understanding something, having a new view, or being provoked to thought)
• affective outcomes: what visitors might feel as a result of interpretation (e.g. appreciation of something, satisfaction with something, an attitude about something)
• behavioural outcomes: what visitors might do or be motivated to do as a result of interpretation (e.g. stay longer at the site, buy something, positive word-of-mouth advertising)

A series of criteria was then applied, including:

• whether a single indicator could capture the essence of multiple indicators (particularly between the two sites), i.e. whether there were multiple items measuring the same phenomenon,




• the applicability of each indicator beyond one particular site to other heritage sites and, as discussed in more detail below, to nature-based and food and beverage attractions,
• the relative importance (ranking) of each indicator by the original participants,
• whether the indicator made sense in terms of contemporary communication theory and research (i.e. could interpretation realistically be expected to achieve this outcome), and
• whether the indicator could be measured simply yet reliably by a non-social scientist.

Again, we used other researchers to independently apply our criteria and select indicators, and then compared their decisions with ours. Applying these criteria produced the final suite of 11 indicators which, in turn, became the focus for the development of the Evaluation Tool Kit. These 11 indicators are shown in the following table and discussed in more detail in Section 1. Developing procedures for measuring and collecting data on each indicator was the next step in the Tool Kit development process.

Indicator | Indicator Long Title | Category
A | Impact on current world view via empathy with historic period & people / impact on appreciation of indigenous connections to nature | Cognitive / affective
B | Elaboration (provoked to thought) | Cognitive
C | Positive attitude toward heritage / heritage preservation | Affective
D | Positive global evaluation of interpretation at site | Affective
E | Desire to participate in additional interpretive activities | Affective / behavioural
F | Desire to purchase a memento or souvenir related directly to site story | Affective / behavioural
G | Desire to stay longer | Affective / behavioural
H | Desire to return for repeat visit | Affective / behavioural
I | Positive word-of-mouth advertising | Affective / behavioural
J | Visitors found it relevant & meaningful to their lives | Cognitive
K | Visitors provoked to interact with the guide / interpreter (interactive experience) | Behavioural

Deciding on data collection methods for each indicator

Each of the 11 indicators was independently rated by three social scientists according to the ease and precision with which it could be measured by: (1) a questionnaire, (2) a formal interview, (3) an informal interview (including focus groups), and (4) observation. We then assessed each method using widely agreed-upon criteria, including the method's ability to produce valid and reliable information and the burden imposed on the staff who would be using it to collect data for each indicator (i.e. the expertise, time and other resources required). These assessments resulted in a decision to use a visitor questionnaire to measure 10 of the 11 indicators (A to J), and participant observation to measure Indicator K.


Development and testing of the quantitative indicators

Development of the Tool Kit's quantitative indicators required two main steps. The first was designing a visitor questionnaire for each of the three types of settings (food & beverage, heritage and nature sites). The second was collecting data from real visitors at multiple locations representing each setting to make sure the information being produced was valid and reliable.

To do this, we developed a draft questionnaire using a well-established method for measuring the kinds of responses sought by Indicators A to J. Then we used our Industry Reference Group and other social scientists to help us adapt some of the questions to the needs and peculiarities of each of the three settings. Through this procedure, three slightly different versions of the questionnaire were eventually produced (one for food & beverage sites, one for heritage sites, and one for nature sites).

In designing the visitor questionnaire, we were able to use multiple measures for the more complex indicators (such as elaboration, attitudes and global evaluations) so that they might provide a more comprehensive picture of the impact of interpretation than a single measure alone could provide. This was possible for Indicators A to D and I to J. In sequencing the questions in the survey form, we were careful to separate these sub-indicators from each other so that visitors would not recognise them as 'going together'.

Each questionnaire was field tested on at least 80 visitors at multiple sites corresponding to each type of setting. This produced three data sets that we could then use for assessing the validity and reliability of each of the indicators. Using advanced statistical analysis procedures, we were able to determine that each of the indicators is indeed measuring some aspect of the kind of outcome it purports to measure (i.e. they are valid). In addition, we subjected the data from the field tests to rigorous reliability testing to make sure that the sub-indicators used to make up some of the overall indicators were producing consistently meaningful data with respect to the indicator they are intended to measure. When there was any doubt, we deleted the sub-indicator from the questionnaire so that the items remaining have the strongest possible reliability. For this reason, the number of items making up the same indicator in the three questionnaires sometimes varies, and so the questionnaires are not interchangeable. This is why you must be sure to use only the questionnaire that best matches your site's needs. Results of the reliability tests are included in Appendix B.

These procedures resulted in the three versions of the visitor questionnaire, each producing usable data on the performance of face-to-face interpretive programs for the type of setting it was designed for.

Development and testing of the observation indicator

Indicator K (visitors were provoked to interact with the guide / interpreter) is assessed using participant observation, which involves a staff member essentially posing as a visitor and systematically observing the interaction between an interpreter or guide and the audience. Following the observational 'rules' explained in Section 5, we field tested the observation procedures and recording form using real interpretive programs at Sovereign Hill and Port Arthur. In both cases, the procedures were found to be reliable.
At Sovereign Hill, four independent observers recorded the same interactions in the same way almost all of the time (90 percent of interactions coded consistently) and at Port Arthur three observers were consistent in their coding 100 percent of the time.
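For readers curious how such consistency figures are derived, the following is a minimal sketch of a simple percent-agreement calculation between two observers coding the same sequence of interactions. The codes and data here are invented for illustration and are not from the field tests.

```python
# Illustrative only: percent agreement between two observers who coded the
# same interactions. Hypothetical codes, e.g. "PV" = presenter-initiated,
# verbal response; "VN" = visitor-initiated, non-verbal response.

observer_1 = ["PV", "PV", "VN", "PN", "VV", "PV", "VN", "PN", "PV", "VN"]
observer_2 = ["PV", "PV", "VN", "PN", "VV", "PV", "VN", "VN", "PV", "VN"]

matches = sum(a == b for a, b in zip(observer_1, observer_2))
print(f"Percent agreement: {matches / len(observer_1):.0%}")  # 90% here
```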


APPENDIX B: INDICATOR GROUPINGS AND RELIABILITY COEFFICIENTS

Notes to aid your understanding of Appendix B

The following three tables show the indicators used in each type of setting (Food & Beverage, Heritage and Nature) to evaluate the effectiveness of interpretive programs. In each case, the table indicates whether the indicator is measured by a single item or uses multiple sub-indicators. The column labelled 'Question Numbers' lists the questions in the corresponding questionnaire that are used to measure each indicator. Look at each questionnaire to locate the questions that comprise each of the indicators it measures.

The column labelled 'Reliability' shows the statistic called 'Cronbach's alpha', which is a measure of the degree to which the multiple-item indicators vary consistently with each other and with the overall indicator they comprise. Alpha can vary from 0 to 1.0 (perfect reliability). An alpha of .60 is considered the lowest acceptable level of reliability for the types of indicators included in the Tool Kit. Indicators with alphas larger than .70 are considered to have at least moderately strong reliability, and those approaching or exceeding .80 are exceptionally reliable. All of the indicators in the Tool Kit have at least acceptable reliability, and most are strongly reliable. The reason some multiple-item indicators have fewer items than the same indicator has in other questionnaires is that they were found to be more reliable when they contained only some of the items.

Food & Beverage Interpretation Indicators

Indicator | Indicator Long Title | Question Numbers* | Reliability (alpha)
B | Elaboration (provoked to thought) | 1a, 1d, 1f, 1i | .75
D | Positive global evaluation of interpretation at site | 1b, 1g | .85
E | Desire to participate in additional interpretive activities | 3a | NA
F | Desire to purchase a product or memento related to place | 3d | NA
G | Desire to stay longer | 3b | NA
H | Desire to return for repeat visit | 3c | NA
I | Positive word-of-mouth advertising | 2a, 2b, 2c, 2d, 2e | .86
J | Visitors found it relevant and meaningful to their lives | 1c, 1e, 1h | .62
K | Visitors provoked to interact with the guide (interactive experience) | Observation instrument | NA

* These are the question numbers in the Visitor Questionnaire (Food & Beverage package) and across the top of the Evaluation Template for food & beverage sites (in Data Entry View).
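Cronbach's alpha, reported in the 'Reliability (alpha)' column above, can be computed directly from the item responses. The sketch below shows the standard formula applied to hypothetical 7-point data; it is provided only to illustrate the statistic, not to reproduce the Tool Kit's own analyses.

```python
# Illustrative only: Cronbach's alpha for a multiple-item indicator.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(summed scale)),
# where k is the number of items. Rows = respondents, columns = items.

def cronbach_alpha(rows):
    k = len(rows[0])  # number of items (sub-indicators)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 7-point answers from six respondents to a four-item indicator
# (for instance, items 1a, 1d, 1f and 1i of the Food & Beverage questionnaire).
responses = [
    [6, 7, 6, 5],
    [5, 5, 6, 5],
    [7, 7, 7, 6],
    [3, 4, 3, 4],
    [6, 5, 6, 6],
    [4, 4, 5, 4],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # 0.94 for this toy data
```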


Heritage Interpretation Indicators

Indicator | Indicator Long Title | Question Numbers* | Reliability (alpha)
A | Impact on current world view via empathy with historic period & people | 1e, 1j, 1o, 1s, 1u | .77
B | Elaboration (provoked to thought) | 1c, 1h, 1m, 1r, 1t | .83
C | Positive attitude toward heritage / heritage preservation | 1d, 1i, 1n | .85
D | Positive global evaluation of interpretation at site | 1a, 1f, 1k, 1p | .84
E | Desire to participate in additional interpretive activities | 3a | NA
F | Desire to purchase a memento or souvenir related directly to site story | 3d | NA
G | Desire to stay longer | 3b | NA
H | Desire to return for repeat visit | 3c | NA
I | Positive word-of-mouth advertising | 2a, 2b, 2c, 2d, 2e | .89
J | Visitors found it relevant and meaningful to their lives | 1b, 1g, 1l, 1q | .67
K | Visitors provoked to interact with the guide (interactive experience) | Observation instrument | NA

* These are the question numbers in the Visitor Questionnaire (Heritage package) and across the top of the Evaluation Template for heritage sites (in Data Entry View).
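As an illustration of how the 'Question Numbers' column maps items to indicators, the sketch below combines one hypothetical respondent's coded answers into indicator scores, assuming for simplicity that a multiple-item indicator is summarised by the mean of its items. The Evaluation Template performs this aggregation for you, and its own formulas, not this sketch, are authoritative.

```python
# Illustrative only: combining one respondent's coded answers into indicator
# scores, assuming each multiple-item indicator is summarised by the mean of
# its items. Item-to-indicator mapping is taken from the Heritage table above.

heritage_indicators = {
    "C (positive attitude toward heritage preservation)": ["1d", "1i", "1n"],
    "D (positive global evaluation of interpretation)": ["1a", "1f", "1k", "1p"],
}

respondent = {  # hypothetical coded answers (1-7) from one questionnaire
    "1a": 6, "1d": 7, "1f": 6, "1i": 6, "1k": 5, "1n": 7, "1p": 6,
}

for indicator, items in heritage_indicators.items():
    score = sum(respondent[q] for q in items) / len(items)
    print(f"Indicator {indicator}: {score:.2f} out of 7")
```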


Nature Interpretation Indicators

Indicator | Indicator Long Title | Question Numbers* | Reliability (alpha)
A | Impact on appreciation of indigenous connections to nature | 1e, 1n, 1r, 1t | .95
B | Elaboration (provoked to thought) | 1c, 1h, 1l, 1q, 1s | .88
C | Positive attitude toward nature conservation | 1d, 1i, 1m | .73
D | Positive global evaluation of interpretation at site | 1a, 1f, 1j, 1o | .84
E | Desire to participate in additional interpretive activities | 3a | NA
F | Desire to purchase a memento or souvenir related directly to site story | 3d | NA
G | Desire to stay longer | 3b | NA
H | Desire to return for repeat visit | 3c | NA
I | Positive word-of-mouth advertising | 2a, 2b, 2c, 2d, 2e | .90
J | Visitors found it relevant and meaningful to their lives | 1b, 1g, 1k, 1p | .67
K | Visitors provoked to interact with the guide (interactive experience) | Observation instrument | NA

* These are the question numbers in the Visitor Questionnaire (Nature package) and across the top of the Evaluation Template for nature-based sites (in Data Entry View).


APPENDIX C: VISITOR QUESTIONNAIRES SHOWING CODES FOR DATA ENTRY

We recommend that you photocopy the questionnaire you are using onto a clear overhead transparency. This will make it easier for you to enter the data from each completed questionnaire. Simply lay the transparency on each questionnaire and enter the number that corresponds to each response given by the visitor.

IMPORTANT NOTE: Use the uncoded questionnaire master on the Tool Kit CD to make copies of the questionnaire for visitors to complete. Visitors should never see or use the coded questionnaires contained in this Appendix – these are for you to use only when entering your data.
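If you would also like a plain-text record of your coded data outside the Evaluation Template, a simple approach is one CSV row per respondent, with columns named after the question numbers. This is an optional convenience, not part of the Tool Kit's procedure; the file name and abbreviated column list below are examples only.

```python
# Illustrative only: keeping a plain-text backup of coded responses as a CSV
# file, one row per respondent. Column names follow the question numbering
# used in the questionnaires; the file name and columns are examples only.

import csv

fieldnames = ["questionnaire_no", "date", "1a", "1b", "1c", "2a", "3a"]
rows = [
    {"questionnaire_no": "0001", "date": "2005-03-12",
     "1a": 6, "1b": 7, "1c": 5, "2a": 7, "3a": 1},  # 3a: 1 = YES, 2 = NO
]

with open("responses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```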


Food & Beverage Questionnaire

Questionnaire #: ___ ___ ___ ___    Date: ___ ___ ___ ___
Name of Site: _________________

WE NEED YOUR OPINION!
About the presentations & guided tours you attended today

The purpose of this short questionnaire is to find out how you feel about the presentations or guided tours you attended or participated in today. Please know that there are no right or wrong answers to the questions, nor are some responses better or worse than others. We simply want to know your honest opinions about your experience today.

THE QUESTIONNAIRE WILL TAKE LESS THAN 5 MINUTES OF YOUR TIME. THANK YOU!

Instructions (do not answer these example questions): For each question, place an "X" on the line that best shows how you feel about the presentations and guided tours you attended today.

Example 1: If you believe that the presentations and guided tours were extremely long, you would place a mark as follows:

Overall, the presentations and guided tours I attended today:
were long   X : ___ : ___ : ___ : ___ : ___ : ___   were short

Example 2: If you believe that the presentations and guided tours were neither long nor short, you would place a mark as follows:

Overall, the presentations and guided tours I attended today:
were long   ___ : ___ : ___ : X : ___ : ___ : ___   were short

Question 1: Overall, the presentations and guided tours I attended today…

A.) made me curious   7 : 6 : 5 : 4 : 3 : 2 : 1   did not make me curious
B.) were bad   1 : 2 : 3 : 4 : 5 : 6 : 7   were good
C.) were relevant to me   7 : 6 : 5 : 4 : 3 : 2 : 1   were not relevant to me
D.) did not make me think   1 : 2 : 3 : 4 : 5 : 6 : 7   made me think
E.) were not connected to anything I care about   1 : 2 : 3 : 4 : 5 : 6 : 7   were connected to things I care about
F.) made me want to talk about what I heard   7 : 6 : 5 : 4 : 3 : 2 : 1   did not make me want to talk about what I heard
G.) were boring   1 : 2 : 3 : 4 : 5 : 6 : 7   were interesting
H.) were connected to things I know about   7 : 6 : 5 : 4 : 3 : 2 : 1   were not connected to things I know about
I.) intrigued me   7 : 6 : 5 : 4 : 3 : 2 : 1   did not intrigue me

Question 2: Please indicate how much you would be inclined to tell another person each of the following things about this place:

A.) You should visit   7 : 6 : 5 : 4 : 3 : 2 : 1   You should not visit
B.) The place is boring   1 : 2 : 3 : 4 : 5 : 6 : 7   The place is interesting
C.) Coming here is worth the money   7 : 6 : 5 : 4 : 3 : 2 : 1   Coming here is not worth the money
D.) Coming here is not enjoyable   1 : 2 : 3 : 4 : 5 : 6 : 7   Coming here is enjoyable
E.) Coming here is worth the time   7 : 6 : 5 : 4 : 3 : 2 : 1   Coming here is not worth the time

Question 3: Please circle YES or NO for each statement.

A.) The presentations and guided activities I attended today made me want to attend/participate in another presentation or guided activity.   1 YES   2 NO
B.) The presentations and guided activities I attended today made me want to stay longer.   1 YES   2 NO
C.) The presentations and guided activities I attended today made me want to return for another visit in the future.   1 YES   2 NO
D.) The presentations and guided activities I attended today made me want to purchase a product or memento related to this place.   1 YES   2 NO

Thanks for the generosity of your time! If you would like to tell us anything else about your visit today, please write it in the space below.


Heritage Questionnaire

Questionnaire #: ___ ___ ___ ___    Date: ___ ___ ___ ___
Name of Site: _________________

WE NEED YOUR OPINION!
About the presentations & guided activities you attended today

The purpose of this short questionnaire is to find out how you feel about the presentations and guided activities you attended or participated in today. Please know that there are no right or wrong answers to the questions, nor are some responses better or worse than others. We simply want to know your honest opinions about your experience here today.

THE QUESTIONNAIRE WILL TAKE LESS THAN 5 MINUTES OF YOUR TIME. THANK YOU!

Instructions (do not answer these example questions): For each question, place an "X" on the line that best shows how you feel about the presentations and guided activities you attended today.

Example 1: If you believe that the presentations and guided tours were extremely long, you would place a mark as follows:

Overall, the presentations and guided tours I attended today:
were long   X : ___ : ___ : ___ : ___ : ___ : ___   were short

Example 2: If you believe that the presentations and guided tours were neither long nor short, you would place a mark as follows:

Overall, the presentations and guided tours I attended today:
were long   ___ : ___ : ___ : X : ___ : ___ : ___   were short


Question 1: Overall, the presentations and guided activities I attended today…

A.) were enjoyable   7 : 6 : 5 : 4 : 3 : 2 : 1   were unenjoyable
B.) were meaningless   1 : 2 : 3 : 4 : 5 : 6 : 7   were meaningful
C.) made me curious   7 : 6 : 5 : 4 : 3 : 2 : 1   did not make me curious
D.) made protecting heritage seem less important   1 : 2 : 3 : 4 : 5 : 6 : 7   made protecting heritage seem more important
E.) impacted my view of my own life   7 : 6 : 5 : 4 : 3 : 2 : 1   did not impact my view of my own life
F.) were bad   1 : 2 : 3 : 4 : 5 : 6 : 7   were good
G.) were relevant to me   7 : 6 : 5 : 4 : 3 : 2 : 1   were not relevant to me
H.) did not make me think   1 : 2 : 3 : 4 : 5 : 6 : 7   made me think
I.) made me value heritage preservation more   7 : 6 : 5 : 4 : 3 : 2 : 1   made me value heritage preservation less
J.) did not impact my view of today's society   1 : 2 : 3 : 4 : 5 : 6 : 7   impacted my view of today's society
K.) were satisfying   7 : 6 : 5 : 4 : 3 : 2 : 1   were not satisfying
L.) were not connected to anything I care about   1 : 2 : 3 : 4 : 5 : 6 : 7   were connected to things I care about
M.) made me want to talk about what I heard   7 : 6 : 5 : 4 : 3 : 2 : 1   did not make me want to talk about what I heard
N.) made protecting heritage seem less justifiable   1 : 2 : 3 : 4 : 5 : 6 : 7   made protecting heritage seem more justifiable
O.) impacted my ability to relate to people who lived then   7 : 6 : 5 : 4 : 3 : 2 : 1   did not impact my ability to relate to people who lived then
P.) were boring   1 : 2 : 3 : 4 : 5 : 6 : 7   were interesting
Q.) were connected to things I know about   7 : 6 : 5 : 4 : 3 : 2 : 1   were not connected to things I know about
R.) did not make me want to know more   1 : 2 : 3 : 4 : 5 : 6 : 7   made me want to know more
S.) impacted how I see some things about today's world   7 : 6 : 5 : 4 : 3 : 2 : 1   did not impact how I see some things about today's world
T.) intrigued me   7 : 6 : 5 : 4 : 3 : 2 : 1   did not intrigue me
U.) did not impact how I see myself   1 : 2 : 3 : 4 : 5 : 6 : 7   impacted how I see myself

Question 2: Please indicate how much you would be inclined to tell another person each of the following things about this place:

A.) You should visit   7 : 6 : 5 : 4 : 3 : 2 : 1   You should not visit
B.) The place is boring   1 : 2 : 3 : 4 : 5 : 6 : 7   The place is interesting
C.) Coming here is worth the money   7 : 6 : 5 : 4 : 3 : 2 : 1   Coming here is not worth the money
D.) Coming here is not enjoyable   1 : 2 : 3 : 4 : 5 : 6 : 7   Coming here is enjoyable
E.) Coming here is worth the time   7 : 6 : 5 : 4 : 3 : 2 : 1   Coming here is not worth the time

Question 3: Please circle YES or NO for each statement.

A.) The presentations and guided activities I attended today made me want to attend/participate in another presentation or guided activity.   1 YES   2 NO
B.) The presentations and guided activities I attended today made me want to stay longer.   1 YES   2 NO
C.) The presentations and guided activities I attended today made me want to return for another visit in the future.   1 YES   2 NO
D.) The presentations and guided activities I attended today made me want to purchase a memento or souvenir directly related to this place.   1 YES   2 NO

Thanks for the generosity of your time! If you would like to tell us anything else about your visit today, please write it in the space below.


Nature Questionnaire

Questionnaire #: ___ ___ ___ ___    Date: ___ ___ ___ ___
Name of Site: ________________________

WE NEED YOUR OPINION!
About the presentations & guided activities you attended today

The purpose of this short questionnaire is to find out how you feel about the presentations and guided activities you attended or participated in today. Please know that there are no right or wrong answers to the questions, nor are some responses better or worse than others. We simply want to know your honest opinions about your experience today.

THE QUESTIONNAIRE WILL TAKE LESS THAN 5 MINUTES OF YOUR TIME. THANK YOU!

Instructions (do not answer these example questions): For each question, place an "X" on the line that best shows how you feel about the presentations and guided activities you attended today.

Example 1: If you believe that the presentations and guided activities were extremely long, you would place a mark as follows:

Overall, the presentations and guided activities I attended today:
were long   X : ___ : ___ : ___ : ___ : ___ : ___   were short

Example 2: If you believe that the presentations and guided activities were neither long nor short, you would place a mark as follows:

Overall, the presentations and guided activities I attended today:
were long   ___ : ___ : ___ : X : ___ : ___ : ___   were short


Question 1: Overall, the presentations and guided activities I attended today…

A.) were enjoyable   7 : 6 : 5 : 4 : 3 : 2 : 1   were unenjoyable
B.) were meaningless   1 : 2 : 3 : 4 : 5 : 6 : 7   were meaningful
C.) made me curious   7 : 6 : 5 : 4 : 3 : 2 : 1   did not make me curious
D.) made conserving nature seem less important   1 : 2 : 3 : 4 : 5 : 6 : 7   made conserving nature seem more important
E.) impacted my appreciation of the values indigenous people attach to the land   7 : 6 : 5 : 4 : 3 : 2 : 1   did not impact my appreciation of the values indigenous people attach to the land
F.) were bad   1 : 2 : 3 : 4 : 5 : 6 : 7   were good
G.) were relevant to me   7 : 6 : 5 : 4 : 3 : 2 : 1   were not relevant to me
H.) did not make me think   1 : 2 : 3 : 4 : 5 : 6 : 7   made me think
I.) made me value nature conservation more   7 : 6 : 5 : 4 : 3 : 2 : 1   made me value nature conservation less
J.) were satisfying   7 : 6 : 5 : 4 : 3 : 2 : 1   were not satisfying
K.) were not connected to anything I care about   1 : 2 : 3 : 4 : 5 : 6 : 7   were connected to things I care about
L.) made me want to talk about what I heard   7 : 6 : 5 : 4 : 3 : 2 : 1   did not make me want to talk about what I heard
M.) made conserving nature seem less justifiable   1 : 2 : 3 : 4 : 5 : 6 : 7   made conserving nature seem more justifiable
N.) impacted my appreciation of indigenous views of the land   7 : 6 : 5 : 4 : 3 : 2 : 1   did not impact my appreciation of indigenous views of the land
O.) were boring   1 : 2 : 3 : 4 : 5 : 6 : 7   were interesting
P.) were connected to things I know about   7 : 6 : 5 : 4 : 3 : 2 : 1   were not connected to things I know about
Q.) did not make me want to know more   1 : 2 : 3 : 4 : 5 : 6 : 7   made me want to know more
R.) impacted my appreciation of indigenous views of wildlife   7 : 6 : 5 : 4 : 3 : 2 : 1   did not impact my appreciation of indigenous views of wildlife
S.) intrigued me   7 : 6 : 5 : 4 : 3 : 2 : 1   did not intrigue me
T.) did not impact my appreciation of the historic relationship that indigenous people have with the land   1 : 2 : 3 : 4 : 5 : 6 : 7   impacted my appreciation of the historic relationship that indigenous people have with the land

Question 2: Please indicate how much you would be inclined to tell another person each of the following things about this place:

A.) You should visit   7 : 6 : 5 : 4 : 3 : 2 : 1   You should not visit
B.) The place is boring   1 : 2 : 3 : 4 : 5 : 6 : 7   The place is interesting
C.) Coming here is worth the money   7 : 6 : 5 : 4 : 3 : 2 : 1   Coming here is not worth the money
D.) Coming here is not enjoyable   1 : 2 : 3 : 4 : 5 : 6 : 7   Coming here is enjoyable
E.) Coming here is worth the time   7 : 6 : 5 : 4 : 3 : 2 : 1   Coming here is not worth the time

Question 3: Please circle YES or NO for each statement.

A.) The presentations and guided activities I attended today made me want to attend/participate in another presentation or guided activity.   1 YES   2 NO
B.) The presentations and guided activities I attended today made me want to stay longer.   1 YES   2 NO
C.) The presentations and guided activities I attended today made me want to return for another visit in the future.   1 YES   2 NO
D.) The presentations and guided activities I attended today made me want to purchase a memento or souvenir directly related to this place.   1 YES   2 NO

Thanks for the generosity of your time! If you would like to tell us anything else about your visit today, please write it in the space below.


APPENDIX D: OBSERVATION FORM
For recording interaction between visitors and presenters

OBSERVATION FORM

Observation number: ………
Presentation or guided activity: ………………………………………………………
Date: …………………    Time: …………    Observer: ……………………….
Location: ……………………………
Number of visitors in the group: ……..

Who initiated the interaction, and what was the response?

What was the response?   | Initiated by presenter | Initiated by visitor
Verbal (used words)      |                        |
Non-verbal or physical   |                        |

Comments about the interaction today:
…………………………………………………………………………………………..
…………………………………………………………………………………………..
…………………………………………………………………………………………..
…………………………………………………………………………………………..
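For operators who prefer to keep electronic copies of completed forms, one possible representation of a single Observation Form as a structured record is sketched below. The field names are illustrative and mirror the paper form above rather than any Tool Kit file format.

```python
# Illustrative only: one way to keep an electronic copy of a completed
# Observation Form. Field names mirror the paper form, not any Tool Kit file.

from dataclasses import dataclass, field

@dataclass
class ObservationForm:
    observation_number: int
    presentation: str
    date: str
    observer: str
    group_size: int
    comments: str = ""
    # tallies[initiator][response], mirroring the form's 2 x 2 grid
    tallies: dict = field(default_factory=lambda: {
        "presenter": {"verbal": 0, "non-verbal": 0},
        "visitor": {"verbal": 0, "non-verbal": 0},
    })

form = ObservationForm(1, "Guided mine tour", "2005-03-12", "J. Smith",
                       group_size=14)
form.tallies["visitor"]["verbal"] += 1        # visitor-initiated, verbal response
form.tallies["presenter"]["non-verbal"] += 1  # presenter's question drew nods
print(form.tallies)
```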


Glossary of Terms

Anonymous – since you will not be collecting the names of the visitors who complete your questionnaire, your respondents are anonymous. If you collect the names and contact details of respondents (for example, in order to conduct a follow-up telephone survey), then you cannot claim that respondents are anonymous, and this changes the nature (and ethical implications) of your evaluation.

Bias – this is error introduced into a study, and can be the result of any number of decisions: the way respondents are recruited and selected, the order and wording of questions in the questionnaire, the inclusion and exclusion of response categories, or how you record the data. The methods and questionnaires used in this study have been designed with great care in order to minimise bias, though it can never be completely eliminated.

Cell – this is the area within the data file where you type in numbers. The internal parts of a table (like the Observation Form in Appendix D) are also called 'cells'.

Census – this is all of your visitors. When you have low visitor numbers (say 50 or fewer), it's a good idea to have all of them fill out the questionnaire on a given day, rather than selecting a sample. Even if you have more than 50 visitors, getting a census is always preferred, if you have the time and resources to get one.

Code (coding) – a code is the number assigned to the response given by a visitor to each question in the Visitor Questionnaire. The questionnaires in Appendix C show the coding for each question.

Data – these are the responses that visitors give you on the questionnaire. When we talk about 'entering the data', what we mean is that you will type these numbers into the Evaluation Template that you have created from the Evaluation Tool Kit application on the CD. See also View.

Data Entry – see View.

Evaluation – this is a type of research that is used to assess and judge some aspect of a product or service. This Tool Kit assesses the outcomes that interpretation delivers for visitors. Other types of evaluation might judge the content or the quality of the delivery.

Evaluation Templates – these are the application files you have on your CD, and there are three options depending on the setting in which you have collected data: Food & Beverage, Heritage and Nature. We have prepared these to make your data entry from the questionnaires as simple as possible. Each template has two view tabs: Data Entry and Tables & Charts. See also View.

Face-to-face Interpretation – any communication with visitors that is done by staff is a type of personal or face-to-face interpretation, whether delivered individually (e.g. through a talk, demonstration, guided walk or guided tour) or as a group (e.g. a theatre performance or a game that involves visitors). The questionnaire and observation methods used in this Tool Kit focus on the outcomes of face-to-face interpretation, and different methods are needed to evaluate the outcomes of your non-personal media.

FAQs – Frequently Asked Questions.

Fieldwork (field methods / materials / kit) – this is the part of the research when you are on-site with your visitors. It includes all the preparation, activities and materials required to administer a questionnaire to visitors and to observe visitors engaging in face-to-face interpretation, with the least possible bias.

Indicators – these are the measures we have designed to capture the outcomes identified by our industry partners as important for their interpretation. Since we can't really 'know' how well interpretation delivers outcomes, for example its impact on visitors' world views or on visitors' attitudes toward conservation, we have to rely on measures such as visitors' perceptions of these outcomes. The indicators in this Tool Kit have been developed, piloted and reliability-tested so you can be sure that they are the best possible way of measuring these outcomes, and you should not delete, add or alter the format or wording of any of these.

Interpretation Evaluation – see Evaluation.

Item or Sub-indicator – these are the individual questions in the questionnaire that we have written to measure a particular aspect or dimension of an outcome. The formulas in the Evaluation Template add these together to provide the best possible measure of the various indicators, although the number and combination of sub-indicators vary for each indicator and even between different types of tourism operations. These have been developed, piloted and reliability-tested so you can be sure that together they provide the best possible way of measuring interpretation's outcomes, and you should not alter the format or wording of any of these.

Non-personal Media – all communication delivered to visitors without contact with a member of staff, through print or electronic media such as panels, exhibits, signs, self-guided brochure trails, audio installations, portable audio systems, videos, and interactive computers. Although the responses we get from visitors may be influenced by non-personal media, the questionnaires have been designed and worded so that visitors focus on their experiences with your face-to-face interpretation.

Observation Form – this is a template that can be used to systematically record your visitors' interactions with their guide, who initiated the interaction and whether it was verbal or non-verbal. This was the only outcome identified by the industry participants in this study that could not be captured in a visitor questionnaire.

Population – this is the group or groups of people who you are trying to say something about in your evaluation. Generally speaking, this Tool Kit is designed to evaluate the outcomes of the interpretation you provide for English-speaking visitors 18 years of age and over. Of course, you have other visitors, but since this Tool Kit has limitations for use with non-English speaking visitors and children, you will need to select a probability sample from the subset of English-speaking adult visitors.

Probability Sampling – sometimes labelled random sampling, this involves methods for selecting respondents that ensure that everyone in the population (which in your case is English-speaking adult visitors) has an equal chance of being selected. If done correctly, then you can be reasonably confident that those who complete the questionnaires are not a biased subset of your visitors, and therefore you can be reasonably confident that their responses are representative of your visitors. If you select a probability sample of a limited group of your visitors (for example, just adults as we suggest in this Tool Kit), then the responses will be representative of just your adult visitors.

Questionnaires – these are the files on the CD and the documents in Appendix C that you will give to visitors to complete. The precise layout, order, wording and response options make these powerful instruments for collecting data relevant to the indicators this Tool Kit is designed to measure. There are three versions with subtle but important differences, depending on the type of tourism operation you work in: nature-based, heritage or food & beverage.

Refusal – when a visitor declines to complete a questionnaire for whatever reason, this is referred to in research as a refusal. In this Tool Kit, good field methods and a questionnaire that does not take the visitor too long to complete are used to reduce the number of refusals. If you report your results to someone else, it is a good idea to include the number of refusals, as a low number of refusals helps to minimise bias and adds credibility to your results.

Reliability – if an evaluation research method is said to be reliable, it means that different users of the method (including sampling, data collection, and data analysis) will be able to use it and that these different users will come to the same conclusions if they use the methods on the same visitors. New research methods and instruments are designed and implemented every day, but if they have not been reliability tested, they will not be very useful in measuring what you are trying to measure.

Respondent – each visitor who completes a questionnaire.

Response Rate – your response rate is the proportion of your sample of visitors who agree to and who actually complete the questionnaire. It is best to report the usable response rate (see Usable Responses). If you can estimate the size of your sample (English-speaking adult visitors during the data collection period who are given an opportunity to complete the questionnaire) and you know how many visitors complete the questionnaire, then it is straightforward to calculate the response rate. It is important to report this with your findings, as a good response rate helps to minimise bias and adds credibility to your results.

Sample – this is the group of visitors who are given an opportunity to fill out your questionnaire on a given day. If you have low visitor numbers, you can probably invite or approach every visitor to fill out a questionnaire (a 'census'), but when you have a lot of visitors (say more than 50) on a given day, you might want to select a sample of them to fill out the questionnaire. Of course, getting a census is always preferred, if you have the time and resources to get it.

Sample Size – this is the number of visitors you approach to complete the questionnaire. Usually this will be a much smaller number than your total number of visitors or even the population (English-speaking adult visitors). Usually bigger is better, as the larger the sample size the more confident you can be that the findings are representative of the population of visitors. The sample size depends on how variable your visitors are, and how difficult and how costly it is for you to approach them.

Sampling – see Probability Sampling.

Sub-indicator – see Item.

Tables & Charts – see View.

Usable Responses – this is the number of completed questionnaires you end up with, after you have discarded those that are incomplete.

Validity – if an evaluation research method is said to be valid, it means that it measures what it claims to be measuring. This is often confirmed by using more than one method to measure the same thing. We've used a number of ways to ensure that the methods and instruments in this Tool Kit are valid measures of visitors' perceptions of the outcomes of the interpretation.

View – there are two views in the Evaluation Template. One is the Data Entry view, and the other is the Tables & Charts view. You move back and forth between the Data Entry and Tables & Charts views by clicking on these words at the top of your screen.
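As a worked illustration of the Response Rate and Usable Responses entries above, the short sketch below runs the arithmetic for a hypothetical day of data collection; all counts are invented.

```python
# Illustrative only: the response-rate arithmetic described in the Response
# Rate and Usable Responses entries, for a hypothetical day of collection.

approached = 120  # English-speaking adult visitors offered a questionnaire
refusals = 15     # visitors who declined
completed = 105   # questionnaires handed back
usable = 98       # remaining after discarding incomplete questionnaires

assert approached == completed + refusals  # simple consistency check

print(f"Response rate: {completed / approached:.0%}")      # 88%
print(f"Usable response rate: {usable / approached:.0%}")  # 82%
```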


AUTHORS

Prof Sam H. Ham
Prof Ham is Director of the Center for International Training and Outreach and Professor of environmental communication and international conservation in the University of Idaho's College of Natural Resources, Department of Resource Recreation and Tourism. He also holds a courtesy appointment as Adjunct Professor at Monash University. Dr Ham teaches graduate courses in interpretation, environmental communication, international issues in nature conservation, and recreation and tourism management. His research has focused on interpretation and strategic communication. He has authored more than 200 publications including two widely acclaimed books on interpretive methods. Email: [email protected]

Prof Betty Weiler
Prof Weiler is Director of the Monash Tourism Research Unit and Professor of Tourism at Monash University and has been teaching, researching and writing in the area of tourism marketing, management and planning for twenty years. She has published over one hundred journal articles and book chapters, presented 15 invited addresses and plenaries, and delivered dozens of symposia papers and workshops. Dr Weiler has managed or co-managed over a dozen major research projects and seven international and national consultancy projects related to ecotourism, interpretation and tour guiding, and is known for her industry-relevant and applied research focus. Email: [email protected]


Interpretation Evaluation Tool Kit

Over the past five decades, tourism providers across the world have recognised the importance of high quality interpretation as central to their mission.

Being able to document the achievements of your interpretive program influences not only budgets and financial decisions, but it also provides the benchmarks needed for monitoring and continually improving the interpretive services and products you offer. Our overarching purpose in developing this Tool Kit is to give you a set of practical tools that allow you, with minimum bother and complexity, to reliably and validly evaluate your interpretive offerings, and thereby enhance your effectiveness as an organisation.

We hope you enjoy using the Interpretation Evaluation Tool Kit and find that it not only provides you with valuable information on which to base decisions about the continued development of your interpretive offerings, but also that it serves its intended purpose: to strengthen your use of interpretation to achieve the kind of success your organisation is seeking.

The Interpretation Evaluation Tool Kit is available through the STCRC's online bookshop (www.crctourism.com.au/bookshop). Alternatively, complete the order form available from the website (under 'Bookshop' + 'STCRC Forms') and send it to STCRC.

CRC for Sustainable Tourism Pty Ltd
ABN 53 077 407 286
PMB 50 Gold Coast MC QLD 9726 Australia
Telephone: +61 7 5552 8172
Facsimile: +61 7 5552 8171
Email: [email protected]
Web: www.crctourism.com.au