Using research in practice: a quest for questionnaires

Andrew Booth

If using research in practice conjures up visions of librarians poring over issues of Health Information and Libraries Journal or the Journal of the Medical Library Association, then think again. Admittedly, the increasing number of research-orientated articles appearing in our professional literature is a phenomenon very much to be welcomed. Nevertheless, this merely represents the 'tip of the iceberg' where evidence-based information practice is concerned. Not to be underestimated is the contribution to be made by the more general outputs of research and development programmes, whether within health or elsewhere. Take, for example, the following scenario:

As the recently appointed Primary Care Knowledge Manager for Apathey Primary Care Trust you have been charged with conducting an information audit to assess the information and training needs of your health community. You decide to supplement a series of planned interviews with local stakeholders with more comprehensive coverage using a questionnaire. Being relatively inexperienced in research methodologies, you wonder whether there might be some way of increasing the notoriously low response rates within your Trust.

Your attempts to be evidence based appear to be rewarded immediately as you retrieve two very pertinent systematic reviews. The first is the initial published output from a Cochrane methodology review, entitled 'Increasing response rates to postal questionnaires'.1 The second is from the NHS R&D Health Technology Assessment (HTA) Programme, under the title 'Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients'.2 You decide to survey the contents of both reviews to answer a list of questions that you have compiled about questionnaire response rates.

Questionnaires are probably the most common research instrument that librarians encounter in their day-to-day practice. Your first questions refer to the choices that you have with regard to the physical production of your questionnaire.

Appearance of the questionnaire

Should I use coloured paper or stick to white?

Eight trials including a total of 14 797 participants provide an answer to this question.1 These conclude that your chances of getting a response are 6% greater if you use coloured paper than if you use white paper (odds ratio 1.06). However, the 95% confidence interval for using coloured paper instead of white runs from 0.99 to 1.14. As this interval crosses unity (i.e. an odds ratio of 1.00) we can conclude that the observed effect is not statistically significant.
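Translating an odds ratio and its confidence interval back into plain response rates can make figures like these easier to weigh. Below is a minimal Python sketch of that conversion; the 30% baseline response rate is a purely hypothetical figure chosen for illustration, since the review reports relative odds rather than absolute rates.

```python
def apply_odds_ratio(baseline_rate: float, odds_ratio: float) -> float:
    """Convert a baseline response rate into the rate implied by an odds ratio."""
    odds = baseline_rate / (1 - baseline_rate)   # rate -> odds
    new_odds = odds * odds_ratio                 # scale the odds by the OR
    return new_odds / (1 + new_odds)             # odds -> rate

# Coloured paper: OR 1.06, 95% CI 0.99-1.14 (the interval crosses 1.00).
baseline = 0.30  # hypothetical baseline response rate, for illustration only
for label, or_value in [("point estimate", 1.06), ("CI lower", 0.99), ("CI upper", 1.14)]:
    print(f"{label}: OR {or_value:.2f} -> {apply_odds_ratio(baseline, or_value):.1%}")
# Point estimate: 31.2%; but the interval spans 29.8%-32.8%, straddling
# the 30% baseline -- consistent with no real effect at all.
```

Note how the lower confidence limit implies a response rate slightly below the baseline: that is what 'crossing unity' means in practical terms.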

Conclusion for practice. Clearly there is not a convincing reason to conclude that coloured paper will produce a better response rate. If the cost of coloured paper is no different from that for white paper then you may decide to experiment with an alternative colour. There is a certain appealing argument (unsubstantiated by this research) that placing a questionnaire on brightly coloured paper might reduce the chance of it being mislaid or overlooked on a busy GP's desk. Perhaps your library has a distinctive colour to its 'livery' that might be strengthened by association through use of this colour in marketing and survey activity. However, if you are planning to photocopy questionnaire returns, a dark-coloured paper will be very unhelpful with regard to legibility and even the economics of photocopier toner consumption!

A legitimate reason for using different colours of paper was illustrated recently in a project that colleagues from the Trent Institute for Health Services Research and I undertook to examine the information and training needs of staff in social care. Here we used a different colour for each 'patch' (e.g. Nottinghamshire, Sheffield, Derbyshire) and this helped in compiling the returns for each locality when questionnaires were returned anonymously by post. Of course, a similar effect might be obtained by using a single running number sequence, although in this instance we had no reason to distinguish respondents at an individual level.

It is interesting to consider the effect that this evidence has had upon our decision-making process. As there is no compelling research-based reason to support our choice, we are able to admit other factors into the equation, such as cost and logistics. If, however, there had been a strong result favouring either alternative then there might well have been tensions between research and practicalities.

Does it help to personalize the questionnaires?

Thirty-eight trials involving 39 210 participants were identified to address this question.1 In this case your chances of getting a response are 16% greater if you personalize the questionnaires than if they are not personalized (odds ratio 1.16). The 95% confidence interval for personalizing questionnaires versus not doing so runs from 1.06 to 1.28—a clear result in favour of personalization.

Conclusion for practice. Personalizing a questionnaire is potentially time-consuming and resource intensive. However, if you have access to the required human resources then it appears to be worth doing. Practical considerations, such as the need to get all questionnaires distributed within a narrow window of opportunity, might well run counter to this finding. The number of questionnaires to be distributed may well be an overwhelming consideration. In the above-mentioned social care project we did not have the administrative and clerical support needed to personalize the 595 questionnaires required. This illustrates that occasionally an 'evidence-based' option might be denied to us because of practical considerations—a not-unfamiliar situation within the context of health care rationing.

Of a total of six factors relating to appearance examined by the review,1 only one other appeared to have a significant positive effect—using coloured ink rather than standard ink (OR 1.39; 95% CI 1.16–1.67). However, this was the result of only a single trial. Although using a brown envelope in preference to a white one seems to have a marked effect (OR 1.52), the confidence interval is wide and crosses unity (95% CI 0.67–3.44), indicating that this result was not significant.

Methods of delivery

Does a stamped addressed envelope increase response over and above a business reply or franked envelope?

Fourteen trials involving 38 259 participants address this particular issue.1 There is a clear effect in favour of the stamped addressed envelope (OR 1.26) with a narrow confidence interval (95% CI 1.13–1.41). Your chances of getting a response are therefore 26% better if you use a stamped addressed envelope. Interestingly, there is some evidence, although not statistically significant, that a stamped outward envelope performs worse than a franked one (OR 0.95; 95% CI 0.88–1.03).

Conclusion for practice. The method of return is very much determined by organizational policies, constraints and facilities. In practical terms it may be easier to be self-sufficient within your library by enclosing stamped addressed envelopes. Franked envelopes may require the participation of a central post room with whom relationships may already be fraught because of the volume of the questionnaires themselves! Business reply envelopes require extra effort and resources if they are to be produced and licensed—if these costs have not been anticipated they may prove prohibitive. However, many organizations are very wary about bulk purchase of postage stamps because it might present problems for financial audit, with the additional fear that the stamps might be used for purposes other than the project. Here, we have a vivid demonstration of how the wider picture may limit our options to pursue an evidence-based approach.
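The attraction of the stamped addressed envelope can also be framed as a cost per extra response, which is often the figure a budget-holder wants. The sketch below is purely illustrative: the 30% baseline response rate and the stamp price are hypothetical assumptions, and the mailing size of 595 is borrowed from the social care project mentioned above.

```python
def apply_odds_ratio(baseline_rate, odds_ratio):
    # rate -> odds, scale by the OR, odds -> rate (as in the earlier sketch)
    odds = baseline_rate / (1 - baseline_rate)
    return odds * odds_ratio / (1 + odds * odds_ratio)

n_mailed = 595       # mailing size, borrowed from the social care project
baseline = 0.30      # hypothetical baseline response rate
stamp_price = 0.28   # hypothetical cost of one return stamp, in pounds

improved = apply_odds_ratio(baseline, 1.26)  # stamped addressed envelope, OR 1.26
extra_responses = n_mailed * (improved - baseline)

print(f"Response rate: {baseline:.0%} -> {improved:.1%}")
print(f"Expected extra responses: {extra_responses:.0f}")
print(f"Stamp outlay per extra response: £{n_mailed * stamp_price / extra_responses:.2f}")
```

Every mailed questionnaire carries a stamp but only the marginal responses are attributable to it, so the cost per extra response is rather higher than the stamp price itself.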

Disconcertingly, the single most important delivery factor is that of recorded delivery versus standard postal delivery (OR 2.21; 95% CI 1.51–3.25)—a sure way to get your manager's alarm bells ringing! A more realistic aspiration might be to send the questionnaires first class (OR 1.12; 95% CI 1.02–1.23).

Incentives

Should I offer respondents the chance to win a bottle of champagne?

Forty-five trials involving 44 708 participants1 show that a non-monetary incentive is advantageous over no incentive (OR 1.19; 95% CI 1.11–1.28). However, two other incentive strategies perform better than a non-monetary incentive. Providing a monetary incentive more than doubles the odds of a response compared with no incentive (OR 2.02; 95% CI 1.79–2.27). Perhaps more practically, offering an incentive with the questionnaire, rather than only to those who respond, also has a favourable effect (OR 1.71; 95% CI 1.29–2.26).

Conclusion for practice. Although your organization's ability to pay an incentive may be limited, particularly if you are working within a public sector organization such as the NHS or a university, it is certainly worth giving some thought to the type of incentive that might appeal to your particular audience. Sometimes an item of equal value to another may have greater appeal because of how it is perceived (compare a book token with a bottle of champagne). Only a detailed knowledge of your audience—its drivers and its value systems—will help you get the most out of any incentive that you (or your sponsors) have to offer.

Communication

Should I give a deadline date for response?

Four trials, involving 4340 participants, examined this particular issue.1 The overall effect is unity (OR 1.00) with a 95% confidence interval of 0.84–1.20. In other words, the likelihood of return is the same whether or not there is a response deadline.

Conclusion for practice. Conventional wisdom has it that you should always give a response deadline when sending out a questionnaire. We can see from the above that this appears unsubstantiated by rigorous research. Nevertheless, there are various other factors that might militate in favour of a deadline date, other than any anticipated effect on the response rate. First, there is the ethical issue of whether it is desirable to have busy health practitioners spend time filling out a questionnaire when we know that we will not be using their responses because we have already collated the returns. Secondly, there is the project management aspect: we might find it helpful to set a definite date on which to commence analysis rather than wait until the (optimistic) flood dries up to a trickle.
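Before turning to conclusions, it may help to see these odds ratios side by side in absolute terms. The sketch below ranks the significant strategies quoted above by the absolute gain each would imply at a single hypothetical 30% baseline; note that the review compared each strategy independently, so these gains cannot simply be stacked on top of one another.

```python
def apply_odds_ratio(baseline_rate, odds_ratio):
    odds = baseline_rate / (1 - baseline_rate)
    return odds * odds_ratio / (1 + odds * odds_ratio)

baseline = 0.30  # hypothetical baseline response rate
strategies = {  # odds ratios quoted above from the Cochrane review
    "recorded delivery": 2.21,
    "monetary incentive": 2.02,
    "incentive enclosed (vs on response)": 1.71,
    "stamped addressed envelope": 1.26,
    "non-monetary incentive": 1.19,
    "personalization": 1.16,
    "first-class post": 1.12,
}
for name, or_value in sorted(strategies.items(), key=lambda kv: -kv[1]):
    gain = apply_odds_ratio(baseline, or_value) - baseline
    print(f"{name:38s} OR {or_value:.2f} -> +{gain:.1%} absolute")
```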

Conclusion

In taking a topic that could practically involve any library or information worker, whether involved in the initiation of a service or its ongoing monitoring and evaluation, we hope to have demonstrated two specific truths about evidence-based information practice:

1 That it is feasible to use research evidence in a practical, and hopefully realistic, setting.
2 That the evidence base is not confined to the health information press, nor even to the information science literature in general, but can be identified within a wider body of research to which we will increasingly need to gain access.

In fact, the questions covered above, dealing specifically with response rates, are a mere fraction of the issues relating to questionnaire design and administration covered by the research literature.2 Conflict of interest should be sufficient for me not to give undue emphasis, for example, to the fact that having a university as sponsor or source increases your chance of response by 31% (OR 1.31; 95% CI 1.11–1.54)!1 Perhaps the greatest disadvantage under which we all have to labour is that a more interesting questionnaire is advantageous over a less interesting one by a factor of almost 2.5 (OR 2.44; 95% CI 1.99–3.01).1 That rules out asking anything about books and libraries then!

Of course, with our critical appraisal skills duly honed, we will resist the temptation to be too accepting and uncritical of such research. As the authors themselves point out, without information about what the baseline response would have been, some strategies may only succeed in pushing 70% response rates up to 80%, while others may work particularly well at the lower end of the response rate continuum. Neither can we conclude that combining all the successful strategies continues to increase success, particularly once an acceptable threshold has been reached.
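The authors' caveat about baselines is easy to demonstrate numerically. Using the same hypothetical conversion as in the earlier sketches, an odds ratio of 1.71 (the up-front incentive figure quoted above, chosen purely as a worked example) yields very different absolute gains depending on where you start, and indeed takes a 70% baseline to roughly 80%:

```python
def apply_odds_ratio(baseline_rate, odds_ratio):
    odds = baseline_rate / (1 - baseline_rate)
    return odds * odds_ratio / (1 + odds * odds_ratio)

or_value = 1.71  # incentive enclosed vs on response, as quoted above
for baseline in (0.10, 0.30, 0.50, 0.70, 0.90):
    new = apply_odds_ratio(baseline, or_value)
    print(f"baseline {baseline:.0%} -> {new:.1%} (absolute gain +{new - baseline:.1%})")
# The absolute gain peaks in the middle of the range and shrinks at either
# extreme: the same OR 'buys' less once response is already high.
```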

One thing is sure, however: informing your questionnaire activities through the results of systematic reviews is infinitely preferable to basing them upon the folklore of 'how-to-do-it' textbooks. It has the added advantage of allowing you to claim that, in this aspect at least, you are practising evidence-based information practice!

References

1 Edwards, P., Roberts, I., Clarke, M., DiGuiseppi, C., Pratap, S., Wentz, R. & Kwan, I. Increasing response rates to postal questionnaires: systematic review. British Medical Journal 2002, 324, 1183–5.
2 McColl, E., Jacoby, A., Thomas, L., Soutter, J., Bamford, C., Steen, N., Thomas, R., Harvey, E., Garratt, A. & Bond, J. Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technology Assessment 2001, 5(31), 1–250.
