Usability Testing: Revisiting Informed Consent procedures for testing Internet sites

Oliver K. Burmeister
Swinburne Computer-Human Interaction Laboratory
School of Information Technology
Swinburne University of Technology
[email protected]

Abstract

This paper explores issues of professional, ethical conduct in usability testing, centred on the concept of 'informed consent'. Previous work on informed consent has been conducted in homogeneous geographic locations. With Internet sites being developed at a prodigious rate, these procedures need to be revisited for their applicability to heterogeneous locations, in terms of culture, business practice, language and legal requirements. Some previously valued principles might now be considered discretionary, that is, their applicability is situationally specific. Other principles are mandatory.

Keywords: usability, professionalism, informed consent, remote testing

1.0 Introduction

What challenges to ethical principles of informed consent are posed by (remote) usability testing of Internet sites? This relatively new technology introduces elements of risk to people who participate in usability testing. Two areas in particular are explored in this paper. Firstly, remote testing introduces variables to do with how to appropriately treat people, especially if the observer and the participant are separated not only geographically, but also culturally. Hammontree, Weiler and Nayak (1994) add the use of distributed usability labs as another form of remote testing; this is not a form of remote testing that is addressed in this paper. Castillo and Hartson (1999) also add the variable of remoteness in time, in the sense of the usability test being carried out at one time and developers looking at the test recordings at a later time. Though the latter is not explored directly in this paper, it is explored indirectly in that the first two cases deal with geographical (including intercontinental) remoteness, and thus with differing time zones. It is the author's view that the principles espoused should apply equally to remoteness in time; however, this will need further research. Castillo and Hartson also define remoteness as the user being remote from the (local) developer. This follows Nielsen's (1996a) argument (though Nielsen does not use the terms in this way) that an advantage of web-based remote testing is that the participant being tested can be remote from the usability expert; for Nielsen such remoteness can involve multiple countries. Nielsen (1996b) argues that if a product is intended for use abroad then international usability testing with real users doing real tasks is required. Similarly Hammontree, Weiler and Nayak (1994, p. 25) give the example that "several Hewlett Packard development teams in the United States design custom

software for specific user groups located in Europe, Australia, and Asia-Pacific. In such instances no representative users are available locally and interactive user testing with representative users is only feasible if it can be performed remotely." Discussion of remote testing has so far concentrated on technical issues (Castillo and Hartson, 1999; Hammontree, Weiler and Nayak, 1994) and on cross-cultural issues for design and usability generally (Nielsen, 1996a; Nielsen, 1996b). This paper explores the issue of remoteness from the viewpoint of its implications for informed consent. Secondly, testing live Internet sites means risking participant exposure to material that is not a direct part of the test. Participants might accidentally or deliberately access sites the usability engineer did not intend them to access. Material at those sites might be inappropriate because the participants are minors, or because of cultural, religious, moral or personal reasons. Professional usability testing has grown in importance as part of the development of quality software products. Usability testing (in the IT context) provides the supporting evidence that a particular interface meets the design specifications from the point of view of the users, and is a necessary component of software development. Relating this to ethics, there is the argument by Gotterbarn (2000) that professional software development should exemplify appropriate interpretations of the profession's code of ethics. That is, there are moral implications in professional behaviour, and the public's perception of whether that behaviour is acceptable or not is shaped by publicly available codes of conduct. Specifically, however, the focus of this paper is on informed consent procedures, with particular emphasis on how these relate to usability testing involving Internet technology.

1.1 Quality issues

Miller (1999), one of the architects of the new ACM/IEEE joint Code of Ethics (Gotterbarn, Miller and Rogerson, 1999), says that in areas other than software, quality is a tightly guarded commodity. He cites examples of money-back guarantees for white goods and even for the quality of food bought through retail outlets. Yet as he points out, society is strangely forgiving when it comes to software. He says that there is neither an expectation of quality amongst computing professionals, nor amongst consumers of software products. His explanation is that:

"We often forget how relatively young the culture of computing is. We are at the start of creating a society based, at least in part, on computing. The traditions of computing are being formed now. It behooves us to create ethical, wise traditions. And a tradition of high quality software is an ideal we should be seeking in the computing professions and as consumers. One way in which computing professionals can earn the trust of the public is to insist on higher quality software." (Miller, 1999, p. 2)

Perhaps this is because people do not understand what quality means for software. Miller's definition of quality is measuring the performance of the software. Usability testing the software for quality would give the user perspective on whether it is a quality product or not, independent to a significant extent of the software engineering measures Miller espouses.


In a personal communication with the author, Miller (2000) wrote: "All professionals have responsibilities and privileges due to their role. Computer professionals are trusted to make technical decisions about software and hardware. These decisions require sensitivity to human values as well as technical expertise. The interaction of human values and technical decisions is particularly important in the area of software quality. The question of 'how good is good enough' requires values clarification … analyses and predictions that cannot be based on certainty, but should be based on measurement." Whilst Miller speaks of the need for change, in the view of this author the attitude of accepting poor-quality software products that Miller refers to is already changing. Increasingly, quality is demanded of software products, and usability engineering is one field that is helping develop quality products. The process of quality development is greatly aided by appropriate usability testing of products during the software development process.

1.2 Lessons for Informed Consent from Software Engineering

Informed consent is part of the larger issue of ethical experimentation. There are many well-established procedures to do with selecting participants, what they should be told, the use of written consent forms, and the storage and disposal of the data collected. In this paper these procedures will be revisited for the Internet. This necessarily involves considering issues of internationalisation. For instance, laws concerning the treatment of human participants vary between countries. Not only this, but such laws also vary between states of the same country (as is the case in the US). Yet from a professional ethics viewpoint this paper suggests there may be a core set of coherent principles that span international and state boundaries. Miller (2000) says that he "cannot imagine that the research community or the business community will ever reach consensus about which techniques should be required. The informed consent form does require mutually agreed to measurements of some kind. This seems like a minimum level of professionalism." The paper seeks to synthesize a broad range of concerns to assist in recommending a number of principles that span international borders and are indicative of professional behaviour in usability testing. These principles are discussed as belonging to one of two categories, "mandatory" and "discretionary". That is, there are some principles that are "mandatory": they must be adhered to; they are inviolable. There are others that are "discretionary": they should be adhered to, but are situational and hence subject to contextual interpretation. Category differences also depend on the type of usability testing that is being conducted. Given the heterogeneous nature of remote usability testing, the author agrees with Miller (2000), who in the context of software engineering says that compliance with informed consent procedures is situation dependent. However, the author does make a category distinction not made by Miller, namely that some principles are mandatory.


The paper is both a review of the literature and of current practice for informed consent. The next section begins with a review of what informed consent is aimed at achieving or preventing. The paper goes on to examine procedures for informed consent, giving an example from SCHIL1. The paper concludes with a discussion of how informed consent procedures might be enhanced to better aid HCI professionals in their work in remote (international) usability testing.

2.0 Informed consent in usability testing

Policies regarding informed consent in usability testing are developed by organisations on the basis of generally agreed principles concerning the treatment of human participants. Those principles, plus additional ones, are enumerated below. Seven of these principles are derived from the related discussion in Dumas and Redish (1994, pp. 205-208), though in their presentation they only view principles P2, P3 and P4 as principles of informed consent. They see their other principles as part of the wider legal requirements that need to be met prior to a usability test. In the view of the author, and given that the context of this paper spans legal (geographical) boundaries, these issues are inseparable from the process of obtaining informed consent. One of the additional principles was suggested by Sanderson (2000) in a review of Dumas and Redish on usability testing. The last two principles have not previously been applied in the context of usability testing. All the principles have been extended beyond the existing literature to address remote testing, based on examples of current practice as assessed by contributions made by European, North American and Australian members of an online newsgroup for usability engineers, of which the author is a member (Howard, 2000)2. The author invited members to contribute examples of their practices in informed consent. Though contributions came from three continents, all contributors were from western countries, resulting in a possibly mono-cultural perspective on the issue.

2.1 Minimal risk (P1)

Usability testing should not expose participants to more than minimal risk. Though it is unlikely that a usability test will expose participants to physical harm, psychological or sociological risks do arise. If it is not possible to abide by the principle of minimal risk, then the usability engineer should endeavour to eliminate the risk or consider not doing the test. If the test needs to go ahead despite the risk, then there are well-established policies put out by many of the psychological societies that can serve as a basis for ensuring the protection of the rights of participants (APA, 1997). Dumas and Redish (1994, p. 205), citing the Federal Register, state that minimal risk means that "the probability and magnitude of harm or discomfort anticipated in the test are not greater, in and of themselves, than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests."

1 In 1996 SCHIL was recognised as a major research centre within Swinburne University of Technology and Australia's first full professorial chair in the area of HCI was created (SCHIL, 2000).
2 The policy of this newsgroup does not permit direct quoting of postings. The council that administers the policies of the newsgroup is concerned to keep the group a private list for HCI professionals only and is also concerned to avoid the list being accessed by spammers. The author has been granted permission to reference the group only in ways done in this paper.


Similarly, the Australian 'National Statement on Ethical Conduct in Research Involving Humans' (Commonwealth of Australia, 1999) has a principle of beneficence, which involves maximizing the possible benefits and good for the subject, while minimizing the possible risks and harm.

2.2 Information (P2)

Informed consent implies that information is supplied to participants. The information to be included, as suggested by Dumas and Redish (1994), contributors from the above-mentioned newsgroup (Howard, 2000), Sanderson (2000) and practitioners at SCHIL (2000), can be summarized as: the procedures you will follow; the purpose of the test; any risks to the participant; the opportunity to ask questions; and the opportunity to withdraw at any time. This principle of 'information' might be extended to what participants are told about the test when they are solicited to participate. They could, for instance, be sent a letter detailing in advance what will be recorded and what they will be doing.

2.3 Comprehension (P3)

The facilitator needs to ensure that each participant understands what is involved in the test. This must be done in a manner that is clear. It must also be done so as to completely cover the information on the form. The procedure for obtaining consent should not be rushed, nor made to seem unimportant. The procedure is about the participant making an informed choice to proceed with the test, and therefore they need to be given the opportunity to ask questions. In a remote test this might be managed with a facilitator on site with the participant. Clearly one possible outcome of applying this principle is that the person involved may choose not to participate. However, not to permit such opportunities may adversely affect their ability to make an informed choice. Mackay (1995) says that participants may be naïve when it comes to video taping. The facilitator needs to ensure they understand the implications of giving permission to be video taped and how the recordings will be used (for instance, the SCHIL form in the appendix explicitly seeks permission for certain video-related usages). She points out that in some tests the video camera is left on throughout the session, recording everything that happens, whether directly associated with the test or not. She suggests a sign be used that lets participants know when the camera is on and when it is not. This reminds them about the use of the camera and also gives them an opportunity to step out of the view of the camera, such as during breaks in the test session.

2.4 Voluntariness (P4)

Professional demeanour influences participant involvement. This has implications in remote testing in particular, where people not trained in usability may be called upon to perform various functions during the test, such as assuming the role of facilitator. Participants should not be rushed, nor should facilitators fidget while the participant reads the form. Coercion and undue influence should be absent when the person is asked to give their consent to participate in the test. Undue pressure might come in a number of subtle ways that one needs to be wary of. For instance, if you are in a position of authority over the participant, such as employer to employee or teacher to

student. Another subtle form of coercion is involved when participants receive payment for their participation. In the latter case it may be prudent to make the payment upfront, prior to the test. That way the participant will not feel pressured to stay to the end of the test (see P5 about the right to leave the test at any time). A variation on this approach is that of Jarrett (2000b). She says: "If the participant has been offered a financial incentive as part of the recruitment process, I hand it over immediately before explaining that they can stop the test at any time without giving any reason. I felt that the knowledge that they hadn't received their incentive might inhibit them from leaving the test. If the participant is not aware of the incentive, then I leave it to the end."

2.5 Participant's rights (P5)

Countries vary as to their recognition of human rights. Even where there is general agreement, definitions of those rights and interpretations of how they apply vary. Participants should have the right to be informed as to what their rights are. Karat and Karat (1997) reviewed the codes of ethics of 30 national computer societies and found that they shared five major topic areas. The first on their list, "Respect", addressed the need to respect the rights of people involved with the technology, if for no other reason than for the prestige of the profession. Dumas and Redish (1994), revealing a western bias, suggest that the rights most relevant to usability testing include the right to leave the test without penalty, the right to have a break at any time, the right to privacy (such as not having their names used in reporting the results of the test), the right to be informed as to the purpose of the test and the right to know before the test what they will be doing.

2.6 Nondisclosure (P6)

When the product is under development or in any way confidential, participants need to be informed that they cannot talk about the product or their opinions of it. Dumas and Redish (1994) suggest giving participants appropriate wording that they can use to account for the time they spent in the usability test. Participants need to be informed about what they are permitted to divulge. There is a copy of an informed consent form specifically targeted at the principle of nondisclosure available at the site of the Usability Special Interest Group (2000).

2.7 Confidentiality (P7)

Confidentiality is different from the participant's right to privacy; it refers to how data about the participants will be stored. The ACS (2000) code stipulates that it is obligatory for members to preserve the confidentiality of others' information. The ACM (2000) code has specific clauses on constraining access to certain types of data, and on organizational leadership to ensure confidentiality obligations are adhered to within organizations. In remote testing this can be extended to electronic data-logging over the internet. Mackay (1995) extends confidentiality also to who has access to video footage. Several usability professionals (Howard, 2000) gave anecdotal

illustrations of situations they had witnessed where video segments of participant behaviour were shown in presentations unrelated to the original usability test. In one instance a participant in such a test was in the audience of such a breach of confidentiality – a monetary settlement stopped the ensuing legal action. The difficulty for remote testing is that laws concerning privacy and confidentiality vary greatly. The legalities must be investigated in the context of where the test is taking place. For instance, addressing the Australian context, Brankovic & Estivill-Castro (1999), who define privacy as being to do with people and confidentiality as being about data, say that the privacy act protects federal government data ownership only [apparently there are no laws currently governing private corporation ownership of data].
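By way of illustration only, confidentiality in electronic data-logging can be supported in the logging software itself, for instance by replacing participant names with keyed pseudonyms before any record is stored or transmitted. The following minimal sketch is a hypothetical example, not a description of SCHIL practice or of any tool cited in this paper; the function and field names are the author's assumptions.

```python
import hashlib
import hmac
import json
import time

# Hypothetical secret key held only by the organisation conducting the test (P7):
# without it, logged pseudonyms cannot be traced back to participants.
SECRET_KEY = b"replace-with-a-key-kept-under-lock-and-key"

def pseudonymise(participant_name: str) -> str:
    """Return a stable pseudonym so a participant's sessions can be linked
    for analysis without storing any identifying information."""
    digest = hmac.new(SECRET_KEY, participant_name.encode("utf-8"),
                      hashlib.sha256)
    return "P-" + digest.hexdigest()[:12]

def log_event(participant_name: str, event: str, detail: str) -> str:
    """Build a log record that carries only the pseudonym, in line with a
    consent form's promise that no logs will identify the participant."""
    record = {
        "participant": pseudonymise(participant_name),
        "event": event,
        "detail": detail,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    return json.dumps(record)

if __name__ == "__main__":
    # Example: the raw name never appears in the stored record.
    print(log_event("Jane Citizen", "page-visit", "checkout form opened"))
```

A keyed hash rather than a plain hash is assumed so that anyone who obtains the logs cannot recover identities simply by hashing guessed names; the key itself would warrant the same 'lock and key' treatment the SCHIL form promises for recordings.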

2.8 Waivers (P8)

Permission needs to be obtained from participants to use materials such as questionnaires, audio and video recordings (and their transcripts). In many countries participants have the right to refuse to give waivers. Participants should be given the option (see the example of the SCHIL form in the appendix) of having the data used for the purposes of the test only, or of also having it used in a wider context. If the latter, then the consent form should state in what further ways the data will be used, so that an informed decision can be taken by the participant. Such permission should state the purposes for which the material will be used. Several usability engineers with whom the author corresponded in researching the material for this paper gave anecdotal stories of near court cases, and in one case an out-of-court settlement, where material obtained during a usability test was subsequently used in very different settings, such as sales promotion of the product and for training purposes. Clearly, at a minimum, permission should be sought to use the materials for purposes relating to the evaluation of the product being tested. The actual wording will depend on numerous circumstances, such as local legal requirements and company policies. The "Special Release" section of the SCHIL consent form in the appendix shows one example of wording that could be used.

2.9 Legalese (P9)

Miller (1998) says that in software engineering informed consent documents should include measures of software quality. Measures that should be documented include margins for error, significant digits and rounding protocols, and these should be presented in unambiguous language. This should also become a principle for usability testing. It is too tempting to have legal departments draft the consent form. Just as software engineering terminology and legal jargon can hinder the signing of forms, so in usability testing such language does not make for rapport building prior to the start of a usability test. Sensitive use of non-legal language should be made so that comprehension (P3) on the part of the participant is possible.

2.10 Expectations (P10)


Globalization, and related issues to do with international differences in culture and ethnicity, lead to the notion of expectations. Each social grouping has its own means of resolving issues of power and hierarchy, turn taking, how interactions between people proceed, and who can interrupt and contradict. There are expected behaviours. There are accepted behaviours. Cultures interact through expectations. The implication for remote testing is that a level of familiarity with users that is acceptable in one culture for eliciting useful information may be deemed inappropriate in certain other cultures. For instance, privacy expectations vary, which impacts the use of recording media in a usability test. Khaslavsky (1998) argues that misunderstanding in communication is common between people of the same culture (which is one reason for P9). She says such misunderstandings are magnified when dealing with people cross-culturally. Misunderstandings arise due to differences in work practices and social class systems. One usability engineer associated with SCHIL (2000) told the author of his experiences in usability testing in Asia, where participants often did not question the test facilitator in ways that a western participant frequently does, because of the perception of the facilitator as a person of authority. In his experience Asian expectations concerning how one relates to people in authority can be quite different from similar relationships in the west. Yeo (1998) illustrates this with an example of a usability test conducted in Singapore, in which a participant broke down and cried. A post-test interview revealed that the participant's behaviour was attributable to the Eastern culture in which it is not acceptable to criticize the designer openly, because it may cause the designer to lose face. Yeo also cites examples of gender expectations that differ between cultures. In some cultures it is simply not appropriate to pair a man and a woman in a co-discovery design scenario. These are important considerations for remote usability testing. Company policies based on existing informed consent principles do not accommodate this principle of expectations.

2.11 Absolutes

So which of these principles are mandatory and which are discretionary? Informed consent is about protecting the rights of all parties involved in a usability test. The first five principles are concerned with the rights of participants, the next four are concerned with the rights of the company that has organised the testing to take place, and the last applies to both. On an international scale, all these principles are in the "discretionary" category; none are "mandatory". Given the heterogeneous nature of people involved in remote testing in particular, this is an inescapable reality. Yet in most Western societies, at least those that might be described as representing the European Anglo-Celtic view, the first four principles fall into the "mandatory" category. That is, despite the differences in legal requirements, all these societies have value systems similar to each other. This is seen in the similarities of the codes of ethics of three of the major professional computing bodies in their respective countries: the British Computer Society, the Australian Computer Society and the Association for Computing Machinery (Burmeister, 2000). However, even P1 (minimal risk), which is legally required in many western settings, has situation dependence. It is defined as minimal compared to


‘normal’ behaviour. But by this definition, what is normal and hence of minimal risk to military personnel being tested on a new product may be considered very differently if one were testing primary school children of the same culture, even the same city, as the military personnel. Similar in situ arguments can be applied to each of these principles. Thus even the notion of mandatory and discretionary categories requires contextual interpretation.

2.12 What procedures are or should be involved in obtaining informed consent?

Informed consent is both a process and a formal record of the process. That formal record is typically a form, but may also be another type of recording, such as video. In the case of the form, it is required to be signed by each participant. In the case of other recording types, the consent given by the participant to proceed with the test must be recorded. The recording and the associated process should observe the above principles. In the appendix there is a template of an informed consent document as used by researchers and commercial clients of the SCHIL facilities. Informed consent is part of the larger issue of ethical experimentation. As such, because SCHIL is a research centre of Swinburne University of Technology, it must abide in its work by the ethical policies of the university. One implication of this for SCHIL is that it must comply with very stringent procedures in order to get permission to test participants in the usability laboratory (lab). For these reasons the SCHIL form (see appendix) and procedures serve as a source of best practice. However, this discussion is also influenced by the contributions of other usability engineers (Howard, 2000). Dumas and Redish (1994) suggest that an informed consent form needs to clearly state the rights of each party – the participant (P5) and the organisation conducting the test (P6, P7 and P8). The middle paragraphs of the form in the appendix give examples of this. Miller (2000) expresses the view that informed consent is a "binary transaction". By this he means that both parties should mutually accept it, or not. He does not encourage renegotiating informed consent at a later date, though he says that allowance for this could be made. One instance where renegotiation might ensue is presented by Mackay (1995): when video tapes are to be used for purposes other than those originally agreed to with the participant, permissions need to be renegotiated. Another instance is discussed by Bentley (2000). Bentley suggests there are times when one is deliberately testing for participant awareness of certain features, in which case informing participants of these features to begin with is not appropriate. In such a situation a double consent procedure is one in which the true purpose of the test is revealed after the test and participants are asked to give their consent to this. If a participant refuses, then that person's data should be destroyed. The process of completing the form should ensure that participants understand what the form says. This presents difficulties in some cases, as the legalese required by the legal departments of companies conflicts with the rapport building the test administrator will be engaging in. Finally, the signing of the form by the participant should be witnessed. Dumas and Redish (1994, p. 206) go further, saying that: "If you are videotaping the test, have the camera(s) on while you are going over the form. The videotape shows that the participant was properly informed and voluntarily signed the


form without pressure." However, whilst one sees the intent, the process is a little back to front. They are effectively saying to record the participant before the participant has given permission for this to happen. One imagines that if the participant chooses not to sign the form, such a recording becomes illegal in some countries, and it appears to be a dubious practice in any case. Obviously, if the participant has refused to sign the form then any video footage of that participant should be destroyed. One European usability engineer with whom the author corresponded (Jarrett, 2000a) uses a variation of this approach. She video tapes the process of informed consent because it is less time consuming than filling in paper forms. The process of informed consent begins, at the latest, on the arrival of the participant: by showing them the viewing room (if one exists), introducing observers, showing the equipment in the test room and generally building rapport. It may actually begin earlier (see P2 above) by sending them information about what will be expected of them ahead of their arrival. As stated above, one of the main things about informed consent is (P3) that the participant understands what s/he will experience in the test, though the double consent procedure of Bentley (2000) is an exception to this. Neither the form nor the process ought to be vague about what the participant will experience. A description of what the study is about should be given to the participant; if they are told verbally, then the facilitator ought to use a script so that each participant is informed about the same things in the same way. Generic forms are not sufficient. Therefore the form in the appendix requires the inclusion of study-specific details, which at the very least ought to include the name of the study and the people involved in the study. This is needed to be sure that the participant can make an informed decision (P2) about whether or not to participate. The consent form should state whether the data will be confidential (P7) or anonymous. If the participant has given permission for video footage to be used, anonymity can be enhanced by not video taping the person's face or any other identifiable features. Even so, the use of video often compromises principles of confidentiality (P7) and the rights of participants (P5) (see Mackay, 1991 and Mackay, 1995). In any case the participant ought to be asked whether they will permit the use of video. Some usability engineers like to extend this by placing a monitor easily visible to the participant where s/he can check at any time what is being recorded about him/her and whether this is acceptable (Howard, 2000). Some participants will want to have a copy of the consent form, so provision for this eventuality should also be made. It might, for instance, be seen as one of the rights of a participant (P5). Finally, as stated previously, the process of informed consent describes an attitude that begins when the facilitator greets the participant and continues until the participant leaves. This requires a professional, relaxed approach that is apparent to the participant right from the start.

3.0 Conclusion

Underlying the above discussion are questions such as "who are we trying to protect?" The answer is "all parties". We need to protect the rights of those who agree to participate in the test. In fact the informed consent procedure, when done properly, actually facilitates obtaining their cooperation, because it builds rapport between the participant and the facilitator, and informs participants about what will happen and what

will be expected of them. It also tells them how their data and identity will be protected and in what contexts these will be used. The process also protects the company that has organised the test. Where the usability testing is carried out by third parties, they too are protected by this process. Some usability experts (Howard, 2000) warn that informed consent procedures do not remove the threat of legal action, but observing these principles can mitigate litigation. In addition, the process ensures that the company has rights as to what about their product (if anything) can be disclosed (P6) by participants, and as to the contexts in which it has permission to use the data obtained (P7) in its further work. Informed consent procedures raise the level of public trust in the whole usability test process. It is part of a quality process that is required to successfully bring a software product to market. For usability engineers to get honest and reliable feedback from participants, those people need to be able to trust the company and the people administering the test. Informed consent procedures go a significant way towards ensuring that trust exists from before the start of the test. This way both the company conducting the test and the participant gain in the process. Even more so than with traditional usability studies, the heterogeneous nature of remote web testing of possibly "live" sites has the potential to affect participants' perceptions of their own skills or aptitude. It is incumbent on HCI professionals to follow the ethical guideline: 'A usability participant must leave a study feeling no worse than when he/she arrived and should, if possible, leave feeling better than when he/she arrived.' In concluding, it is worth repeating that informed consent is an attitude that begins when the facilitator greets the participant and continues until the participant leaves.

4.0 References

ACM (2000) Code of Ethics and Professional Conduct, Adopted by ACM Council 16/10/92, http://www.acm.org/constitution/code.html
ACS (2000) Code of Ethics, http://www.acs.org.au/national/pospaper/acs131.htm
APA (1997) Informed Consent, American Psychological Association's Ethics Committee statement, http://www.apa.org/ethics/stmnt01.html
Bentley, T. (2000) Biasing Web Site User Evaluations: A Study, 'to appear', Proceedings of the Annual Conference of the Computer-Human Interaction Special Interest Group (CHISIG) of the Ergonomics Society of Australia, Sydney, Dec.
Brankovic, L. & Estivill-Castro, V. (1999) Privacy Issues in Knowledge Discovery and Data Mining, Australian Institute for Computer Ethics Conference, Lilydale: Swinburne University of Technology, July, pp. 89-99.
Burmeister, O. K. (2000) Applying the ACS Code of Ethics, Journal of Research and Practice in Information Technology, 32(2), May, pp. 107-120.


Castillo, J. C. and Hartson, H. R. (1999) Remote Evaluation, http://miso.cs.vt.edu/~usab/remote/
Dumas, J. S. and Redish, J. C. (1994) A Practical Guide to Usability Testing, Norwood: Ablex Publishing Corporation.
Gotterbarn, D., Miller, K. and Rogerson, S. (1999) Software engineering code of ethics is approved, Communications of the ACM, Vol. 42, No. 10, Oct., pp. 102-107.
Gotterbarn, D. (2000) Thou shalt follow thy Code of Ethics - It should not be an option!, Australian Institute of Computer Ethics Conversation, May, http://www.aice.swin.edu.au/events/conv/2000_05_05.html
Hammontree, M., Weiler, P. and Nayak, N. (1994) Remote usability testing, ACM Interactions, Volume 1, Issue 3, pp. 21-25.
Howard, T. (2000) Informed Consent, a discussion thread in the March postings of 'a professional, private Internet discussion group', Moderator: Tharon Howard, http://people.clemson.edu/~tharon/
Jarrett, C. (2000a) personal communication, Friday, March 31st.
Jarrett, C. (2000b) personal communication, Thursday, September 7th.
Karat, J. and Karat, C. (1997) World-Wide CHI: Future Ethics, SIGCHI Bulletin, Vol. 29, No. 1, January.
Khaslavsky, J. (1998) Integrating Culture into Interface Design, Proceedings of the Conference on CHI 98 Summary: Human Factors in Computing Systems, April, pp. 365-366.
Mackay, W. E. (1991) Ethical issues in the use of video: Is it time to establish guidelines?, CHI '91 Conference Proceedings, Louisiana: ACM Press, April, pp. 403-405.
Mackay, W. E. (1995) Ethics, Lies and Videotape…, CHI '95 Conference Proceedings, ACM, http://www.acm.org/sigchi/chi95/proceedings/papers/wem1bdy.htm
McKenzie, S. (2000) Child Safety on the Internet, Master of Arts thesis, Department of Criminology, University of Melbourne, Melbourne.
Miller, K. (1998) Software informed consent: docete emptorem, not caveat emptor, Science and Engineering Ethics, Vol. 4, No. 3, July, pp. 357-362.
Miller, K. (1999) The Future Looks Dim: Building the Information Society with Shoddy Materials, ETHICOMP International Conference on the Social and Ethical Impacts of Information and Communication Technologies, October 6-8, Rome, Italy.
Miller, K. (2000) personal communication, Saturday, August 12th.
Nielsen, J. (1996a) International Web Usability, The Alertbox: Current Issues in Web Usability, Aug., http://www.useit.com/alertbox/
Nielsen, J. (1996b) International Web Testing, Papers and Essays by Jakob Nielsen, http://www.useit.com/papers/
Sanderson, P. (2000) Lecture on usability testing, Swinburne University of Technology, April 18th.
SCHIL (2000) Swinburne Computer Human Interaction Laboratory, http://www.it.swin.edu.au/schil/
Usability Special Interest Group (2000) Example of a non-disclosure form, http://stc.org/pics/usability/resources/index.html
Yeo, A. (1998) Cultural Effects in Usability Assessment, Doctoral Consortium, Proceedings of the Conference on CHI 98 Summary: Human Factors in Computing Systems, April, pp. 71-75.


5.0 Appendix

The following informed consent form has been provided with permission from SCHIL (2000). For an alternative (downloadable) example of an informed consent form, specifically targeted at the principle of nondisclosure (P6), see the site of the Usability Special Interest Group (2000). Another example of a consent form, relating to the use of the internet with minors, can be found in appendix C of McKenzie's (2000, pp. 157-159) Master of Arts thesis. Though this consent form is not designed for a usability testing context, much of the content is relevant and can be adapted for the purposes of a usability test.


INFORMED CONSENT

Participant's Name: __________________________________________________
Address: ____________________________________________________________
_____________________________________________________________
Email/Phone: _________________________________________________________

I have been provided with information about the procedure for the study and I am happy to take part.

The handling of my data from this study has been explained to me. No notes or logs will bear any information by which I might be identified. In addition, unless I agree to the Special Release below, any video or audio collected during the usability study will be viewed only by . All audio and video material will be analysed and archived under lock and key within the SCHIL Research Centre and Usability Laboratory and its confidentiality will be maintained.

I understand that I can withdraw from the study at any point without prejudice or penalty of any kind.

A responsible SCHIL staff member will be in attendance or available nearby during my session.

Special Release: please tick one

I would / would not be happy for the SCHIL Research Centre and Usability Laboratory to use small excerpts of video from my session for educational and promotional purposes related to future uses of the SCHIL usability laboratory. No excerpts would be shown that could be construed as unflattering or embarrassing for me.

Signature: ______________________________________________ Date: ___________

Experimenter's name: ____________________________________ Date: __________

Experimenter's signature: _________________________________

Responsible SCHIL staff member:
