Web Usability Testing


http://ausweb.scu.edu.au/aw2k/papers/osterbauer/paper.html

A case study of usability testing of chosen sites (banks, daily newspapers, insurances)

Christian Osterbauer ([email protected]), Monika Köhle ([email protected]), Thomas Grechenig ([email protected]), Vienna University of Technology, Institute for Software Technology, Favoritenstr. 11/188, A-1040 Vienna, Austria
Manfred Tscheligi ([email protected]), Vienna University, Institute of Computer Science and Business Informatics, Lenaug. 2, A-1080 Vienna, Austria

Abstract

In this paper we argue in favor of usability considerations and usability testing in general, and in the case of the WWW in particular. Based on related work in the field of WWW usability testing, we present a usability evaluation of selected Austrian web sites from three areas of industry: banks, daily newspapers, and insurances. Methods like scenario-based testing were applied in this empirical web usability study. We present the results for two criteria, navigation and graphics, and reveal general tendencies.

Keywords

usability, web usability, usability testing, design mistakes, scenario-based testing

Introduction

As the capabilities of the web continue to expand (e.g. HTML extensions, VRML, Java), site designers are becoming overwhelmed by a proliferation of interaction techniques. However, the fact that a technology is possible does not imply that it is also desirable, nor that it will be incorporated in a productive manner. Web site usability affects millions of users on a daily basis. Most users have low tolerance for anything that does not work, is too complicated, or that they simply do not like. In the case of user interfaces other than the WWW, a technically oriented user would normally persist for some time in trying to figure out how to use the system. On the WWW, by contrast, a user will give up on a site after quite a small number of problems: there are so many sites out there that users generally have very little patience. First impressions are therefore important, especially for organizations selling products on the net. Many web sites have become cluttered with useless and confusing new features. These observations lead to the assumption that the demands for good usability are probably higher for WWW user interfaces than for other user interfaces.

The remainder of this paper is organized as follows: Section 2 introduces usability of interactive systems in general and the testing of usability aspects. Section 3 presents a brief overview of related work in the field of WWW usability testing. Section 4 outlines two major methods for usability testing. This is followed by a case study of selected web sites, including scenario-based testing, results, and a summary. Finally, section 6 presents general results and conclusions.


Usability and its Testing

Usability of interactive systems in general

Generally speaking, usability can be defined as "a measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and attitude of its users towards it" (Preece et al., 1994). Usability also means bringing the usage perspective into focus. The best way to ensure usability is to treat human factors as an input to design, rather than merely evaluating prototypes or design documentation. Usability efforts succeed when a strategy is developed that leads to measurable usability benefits. All these aspects also apply to the WWW. Nowadays people are faced with an increasing number of possibilities and an enormous level of complexity, so even simple systems should be designed with usability in mind.

While a universally accepted definition of the term usability is still lacking, there are many different approaches to making a product usable. Reed (1992) advocates maxims of usability for software developers. These maxims are equally valid for the web: (a) design for the software end user, not for the designers/clients; (b) test the multimedia software, not the user; (c) test usability with real users early and often; (d) don't test everything at once; (e) measure performance of real-world tasks with the software, not the functionality of the program; and (f) test for usability problems that software designers never imagined. In many organizations usability was, or still is, ignored because there are no objective criteria for usability when developing and procuring products. Then again, many people and organizations now recognize the need for usability in interactive systems, and the benefits that usable systems deliver.

Testing of usability aspects

Although usability has already gained wide acceptance in software development practices, few organizations spend sufficient time conducting actual usability testing with real users. The aim of usability testing is not to solve problems, nor to enable a quantitative assessment of usability (Patterson, 1994). Rather, it provides a means of identifying problem areas and extracting information concerning problems, difficulties, weaknesses, and areas for improvement. Even if usability testing reveals difficulties or faults that cannot be corrected in the model under development, the information is still important for the designers in planning a future release of the product (Chapanis, 1991; Dieli, 1989). Usability testing may serve a number of different purposes: to improve an existing product; to compare two or more products; or to measure a system against a standard or a set of guidelines (Lindgaard, 1994). According to Reed (1992) and Skelton (1992), usability testing determines whether a system meets a pre-defined, quantifiable level of usability for specific types of user carrying out specific tasks. Traditionally, software products, including information materials and multimedia software, have been evaluated by means of marketplace reviews, magazine reviews, and beta tests, but these approaches leave too little time for major modifications and improvement of products. As the process of observing and collecting data from users while they interact with a system, usability testing can be used to address a system's usability problems before it goes into production.


To design a user interface (or any system) to meet the needs of its users, developers must understand what tasks users will use the system for and how those tasks will be performed (Diaper, 1989). Tasks play a vital role in HCI. An understanding of the tasks that users will perform gives developers insight into the functionality that should be provided and how it will be used (Potosnak, 1989). Because tasks at all but the highest levels of abstraction involve manipulation of user interface objects (e.g. icons, menus, buttons), tasks and objects must be considered together in design (Carroll et al., 1991). The WWW is an incredibly fast-growing application domain that brings new kinds of usability challenges. One of the exciting prospects for web site evaluators is the potential for using the web site itself to collect and analyze evaluative data.

Related Work in the Field of WWW Usability Testing

According to Alpar (1999), measuring user satisfaction with a web site is an evaluation exercise. Since the evaluation of web sites can be done from many different perspectives and in many different ways, a variety of efforts have been undertaken in this area. The simplest type of evaluation of web sites is found in many computing journals and even in some general interest publications: evaluation by individual "experts", who are often journalists. These experts give a short description of a web site and rate it by assigning it an overall value. The value is expressed as a number of stars, a number of flies, coloring, an arrow or thumb, or in a similar way. In some cases the sites are evaluated using a few criteria rather than being assigned just a single value (Mouty, 1999) [HREF 1].

Usability studies judge quality by measuring the performance of a document, in actual use, against conventional figures of merit: How quickly can readers find facts in the hypertext? Do readers report liking or disliking their encounter with the web site? Trochim (1996) [HREF 2] describes various approaches to the evaluation of web sites (including standard server log analysis). The evaluation is based on the idea that a web site evolves through the phases of conceptualization of the content domain, development of the content, implementation, and evaluation, and that evaluation should take place in each of these phases. Several evaluation questions are proposed for each phase (Mouty, 1999) [HREF 1].

Nielsen (1996) [HREF 3] gives an overview of the top ten mistakes made in web design in 1996. In Nielsen (1999) [HREF 4] we can see that these mistakes still exist. In his survey he examined twenty prominent sites. Table 1 shows the percentage of occurrence for each of the ten most common mistakes. On average, a single mistake occurred on 16% of the tested sites.

Design Mistake                          Violation Score
Slow download times                     84%
Non-standard link colors                17%
Long scrolling navigation pages         15%
Scrolling text or looping animation     12%
Frames                                  11%
Orphan pages                            10%
Bleeding-edge technology                 7%
Complex URLs                             6%
Lack of navigation support               4%
Outdated information                     1%
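The average mistake rate quoted in the text can be checked directly from the ten violation scores; a quick sketch:

```python
# Violation scores from Table 1 (percent of the twenty tested sites
# exhibiting each design mistake).
scores = {
    "Slow download times": 84,
    "Non-standard link colors": 17,
    "Long scrolling navigation pages": 15,
    "Scrolling text or looping animation": 12,
    "Frames": 11,
    "Orphan pages": 10,
    "Bleeding-edge technology": 7,
    "Complex URLs": 6,
    "Lack of navigation support": 4,
    "Outdated information": 1,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.1f}")  # prints 16.7 -- i.e. roughly 16%, as stated
```

The mean of the ten scores is 16.7%, which matches the paper's "roughly 16%" figure.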


Table 1: Design mistakes on the WWW according to Nielsen (1999)

Instone (1997) [HREF 5] applied usability engineering practices to the web. The goal was to raise awareness of usability by considering the users, the tasks they are trying to accomplish, and the context of their usage. In his example, users tested a travel site. He came up with the following procedure for designing usable web sites:

1. Know your Purpose: Why do people come to your web site? What tasks might they try to perform?
2. Find Ordinary Users: Once you've decided on a task, find people in your target audience who have not been to your site yet and invite them, one at a time, to your business.
3. Watch & Learn: Sit back and watch the users as they try to perform the tasks you've asked them to. The key here is to sit quietly and watch.
4. Collect the Data: As you watch your users, one of two things will happen: they may have no problem accomplishing the tasks you have set out for them, or they may run into difficulties. Take notes when you see them do something you don't understand or when you see them head off in the wrong direction, especially if you see more than one user do this.
5. Back to the Drawing Board: Last, but certainly not least, use what you've learned to improve your site.

In their research report, Spool et al. (1997) describe how well or how poorly some information-rich web sites actually work when people use them to find specific answers. They employed several usability testing methods when testing nine different web sites. All tested sites, while obviously trying to sell products, also provide information. The authors wanted to learn how effortless it was for users to answer questions on these sites. To do this, a usability method called "Scavenger Hunt" was set up. This test used four types of questions to study the ease of finding information on web sites. The participants were asked these types of questions at each site. To answer the questions it was not necessary to leave the site.
After more than 50 tests, the sites were compared to each other. There were five conclusions:

Conclusions after comparing sites
- Graphic design neither helps nor hurts
- Text links are vital
- Navigation and content are inseparable
- Information retrieval is different than surfing
- Web sites aren't like software

Table 2: Conclusions after testing the sites according to Spool (1997)

Two Main Methods for Testing Usability

Checklists

According to Sullivan (1996) [HREF 6] there is a basic method for employing a checklist-based user test:

1. Preliminary Self-Appraisal: No author can view his or her own work with dispassion. Still, there are certain things that inevitably make for an unfriendly web page. You can save considerable time, both for yourself and for your evaluators, if you start with a basic sweep of your site for known usability problems. View this self-appraisal as a preliminary step, however, and not as a substitute for user testing methods.
2. Provide checklists to your testers: The more independent and autonomous your testers are, the more valuable the feedback they can provide. A topical site will probably want to enlist the aid of volunteer testers with some interest in the subject of the site. Corporate sites should strongly consider using agency-based temporary employees for user testing.
3. Provide some brief instructions: Understand that your evaluators will naturally assume that the problems they encounter in


using your site are the result of some fault on their part, rather than a flaw in the design of the site itself. It is therefore vitally important to explain to your testers that you need them to make note of any problems they encounter, regardless of what they believe the underlying cause to be.
4. Leave: In formal usability experiments, the experimenter typically remains in the room to observe and record testers' behavior. But unless you are a trained usability professional, your presence will more likely than not inhibit your evaluators, and thus compromise their ability to test your site.
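A checklist-based test of this kind can be captured in a very simple data structure. The sketch below is illustrative only: the items and the pass-rate scoring are our own assumptions, not Sullivan's actual checklist.

```python
# Minimal checklist sketch: each tester marks every item pass (True)
# or fail (False). The items here are hypothetical examples.
checklist = [
    "All links lead to existing pages",
    "Page loads in under 10 seconds on a modem",
    "Navigation works without the browser back button",
    "Text is readable against the background color",
]

def score(results):
    """Return the fraction of checklist items a tester marked as passing.

    results maps each checklist item to True (pass) or False (fail);
    missing items count as failures.
    """
    passed = sum(1 for item in checklist if results.get(item, False))
    return passed / len(checklist)

# One tester's (invented) results: everything passes except load time.
results = {item: True for item in checklist}
results["Page loads in under 10 seconds on a modem"] = False
print(f"Checklist pass rate: {score(results):.0%}")  # prints 75%
```

Because each item is independent, per-item failure counts across several testers point directly at the problem areas, which is the main benefit of the method.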

Scenario-Based Testing of sites

This method aims at an evaluation of functionality and navigation extending over several web pages. In this kind of testing, typical task scenarios are developed, and actual users evaluate the system's usability by performing those tasks. According to Levi (1996) [HREF 7], a scenario-based usability test involves presenting representative end-users with scenarios, or specific tasks, designed to cover the major functionality of the software system and to simulate expected real-life usage patterns. Such scenarios should be formulated by knowledgeable task experts in consultation with the system designers. Results are then tabulated using such measures as whether the participants correctly accomplished the tasks, the time taken for each task, and the number of pages accessed for each task. During the test the testers are available to assist participants if they get stuck, but such assistance is recorded as a task failure.

In all types of usability testing, one thing must be kept in mind: the software, not the participant, is being tested. Participants must be told in advance that they are indeed participants, not subjects, because the point of usability testing is to determine where the product's design fails. Time plays an essential role here; especially in the case of web sites, time is precious, since many internet users are working from home.

A considerable advantage of empirical end-user testing is recording the evaluation on video, so that the results become incontrovertible. Unlike heuristic evaluation, where HCI experts speculate as to what may cause users' difficulties, an end-user test highlights where users actually do have difficulties. On the one hand, you can record the test person herself to document her reactions and her understanding of the presentation of the information. On the other hand, it is possible to record the screen and the keyboard actions. To do this it is necessary to get the approval of the test person. Video evaluation makes it easier to appraise the results because a scene can be replayed many times afterwards. The video tape can then be reviewed after the testing session to get a clearer picture of the users' actions. This way of end-user testing lends itself very well to an iterative test/fix/retest cycle.

In addition to the video recording, the following measures may be recorded while performing a test:
- the time a participant needs to finish a task
- the correct solution (answer) to the task
- the number of pages visited to complete the task

The analysis of the results should be performed systematically. Incorporating the above-mentioned criteria, clear answers should be obtained to the questions "How well could the tasks be solved?" and "Where did problems occur?".
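The systematic tabulation suggested above can be sketched along the following lines, using the three recorded measures per task. The sample records are invented for illustration; assisted participants are counted as failures, as described in the text.

```python
from statistics import mean, stdev

# One record per (participant, task) pair: whether the task was completed
# unassisted, the time in seconds, and the number of pages visited.
records = [
    {"task": "find account fees", "completed": True,  "time": 95,  "pages": 6},
    {"task": "find account fees", "completed": False, "time": 180, "pages": 14},
    {"task": "find account fees", "completed": True,  "time": 120, "pages": 8},
]

# "How well could the tasks be solved?" -- completion rate over all attempts.
completion_rate = mean(r["completed"] for r in records)

# "Where did problems occur?" -- times of successful attempts only;
# a high mean or spread flags a problematic task.
times = [r["time"] for r in records if r["completed"]]

print(f"Completion rate: {completion_rate:.0%}")
print(f"Mean time (successful attempts): {mean(times):.0f}s, sd {stdev(times):.0f}s")
```

The same aggregation per site and per task makes sites directly comparable, which is how the study's tables of results can be produced.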

Case-Study from Selected Web Sites


So far we have examined the concept of usability and reported on related work done in the field of usability testing. We have also introduced two major methods for testing usability. In Osterbauer (1998) we presented the development of a checklist and its application to Austrian web sites. In the following we take a look at our empirical study, in which we applied scenario-based testing to selected web sites.

Empirical Study

The first part of our empirical usability study is scenario-based testing. In the second part, the participants' comprehension of the hierarchical structure of several related web pages is tested. The third part consists of a post-test questionnaire. The usability of highly structured information like a hypertext system fundamentally depends on the ability of the user to develop a coherent and comprehensible mental image of the structure of the system. Because most web sites consist of menu-driven navigation features and hyperlinks, the necessity to develop a cognitive structure of hypertext representations clearly arises.

First, we selected five web sites from each of the areas banks, daily newspapers, and insurances, resulting in fifteen different test sites. Then we formed five groups by picking one site from every area per group, i.e. each group contained a bank site, a newspaper site, and an insurance site. Each test person tested an entire group of sites, and each group was tested by seven persons, so we had 35 test persons in total. The sites differed in structure, size, and complexity. The web experience of the participants ranged from novice to expert user. The sites offered rather general information for a broad spectrum of users, so the target audience was covered with these 35 test persons.

To rule out browser-specific navigation, the browser navigation features (e.g. the back button) were disabled, and the participants were instructed to use only navigation features within the web site. The participants were asked to perform three different tasks at each site. These tasks took an average performing time of three minutes. Every task started from a different level of the hierarchy of the web site. All actions of the test persons were video taped. The test persons got a brief introduction to the test. When done with a task, the test persons were encouraged to state its solution aloud. After finishing all three tasks, the test persons were asked to draw a map of the site's hierarchical structure solely from memory. The intention was to trace their cognitive ability to reconstruct the different levels of the structure. After having tested all three sites, the participants were asked to answer a questionnaire. This questionnaire included questions about the navigation and other features of the sites. Some questions could be answered on a rating scale; other questions asked the participants to verbalize their thoughts and feelings about the sites. Finally, each person was interviewed to capture everything that could not be expressed in the questionnaire, e.g. ideas for improving the sites.

The first step in evaluating the data was the analysis of the video tapes. The focus here was on the general understanding of the tasks and on whether the test persons could figure out how to complete them. The questionnaire was analyzed statistically. To begin with, all errors were marked and subsequently categorized in two ways. One category consisted of erroneously depicting the
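The grouping scheme described above (five sites per area, one site from each area per group, seven testers per group) can be sketched as follows; the A-E labels are the paper's anonymized site names:

```python
banks      = ["bank A", "bank B", "bank C", "bank D", "bank E"]
newspapers = ["newspaper A", "newspaper B", "newspaper C", "newspaper D", "newspaper E"]
insurances = ["insurance A", "insurance B", "insurance C", "insurance D", "insurance E"]

# One site from each area per group -> five groups of three sites each.
groups = list(zip(banks, newspapers, insurances))
testers_per_group = 7

assert len(groups) == 5 and all(len(g) == 3 for g in groups)
print(f"Total test persons: {len(groups) * testers_per_group}")  # prints 35
```

Each tester thus sees exactly one bank, one newspaper, and one insurance site, and every one of the fifteen sites is rated by seven different people.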


overall hierarchy. The other category captured misplaced parts of the hierarchy. These categories were further divided into smaller groups, like missing branches in the hierarchy or permuted hierarchies in the first category and missing sites, missing links, or incorrect links in the second category. Then the number of errors for every single graph was determined. To identify problem areas, the mean and standard deviation for the categories were calculated. These results were then contrasted with the other results of the test.
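Computing the mean and standard deviation per error category, as described above, might look like the following sketch; the error counts are invented sample data, not the study's actual figures:

```python
from statistics import mean, stdev

# Number of errors counted in each participant's hierarchy graph,
# grouped by error category (illustrative sample data).
errors = {
    "missing branches":   [2, 1, 3, 0, 2],
    "permuted hierarchy": [1, 0, 1, 1, 0],
    "missing links":      [4, 2, 5, 3, 4],
}

# Categories with a high mean indicate systematic problem areas;
# a high standard deviation indicates the problem hit only some users.
for category, counts in errors.items():
    print(f"{category}: mean {mean(counts):.1f}, sd {stdev(counts):.1f}")
```

Contrasting these per-category statistics with the questionnaire ratings then shows whether the structural errors line up with the subjective judgments.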

Results

The following analysis is restricted to two of the criteria tested in the questionnaire: navigation and graphics. The tested sites are grouped by area of industry. For evaluation purposes we used scales ranging from 1 to 7 to answer the questions. In the following, "1" signifies the best rating and "7" the worst rating; neutral means that there is no preference, either positive or negative.

Navigation

First, screenshots of bank D and newspaper A are shown in figures 1 and 2, respectively. Both are positive examples of navigation (according to the test persons' ratings).

Figure 1: screenshot of bank D.


Figure 2: screenshot of newspaper A.

We consider the question "How do you judge the navigation within the web sites?", where "1" indicates simple and "7" indicates complicated.

Navigation of banks

Generally speaking, navigation within the area of banks was more pleasant when there was less information on one page and there were no frames. In the main, sites which abstained from frames achieved a better result. The site of bank A did not use frames, but there was too much text on the pages, so they were overloaded with information. Another negative example is the site of bank E, where most of the participants considered navigation complicated. Bank C did use frames, but the site's information was clearly structured, so navigation was actually improved. The site of bank D was predominantly felt to be easy, as the first pages contained comparatively little information, with more detailed information appearing deeper in the hierarchy. Bank D also provided a navigation bar at the top and at the bottom of each page, which contributed further to a better feeling regarding navigation. The site of bank B used in-line frames. This technique works better than regular frames and was rated rather positively.


Figure 3: Rating of the navigation of banks A to E. Rating "1" indicates simple navigation whereas rating "7" indicates complicated navigation.

Navigation of newspapers

The sites of the newspapers showed several kinds of navigation, so it was not possible to compare them directly to each other. The techniques ranged from simple links and frames to navigation bars. There was no clear preference for a certain kind of navigation; the opinions of the test persons varied widely.

Figure 4: Rating of the navigation of newspapers A to E. Rating "1" indicates simple navigation whereas rating "7" indicates complicated navigation.

Navigation of insurances

Like the newspapers, the insurances showed quite a variety of navigation. One of the


best strategies was exhibited by insurance A where tabs were employed to navigate within the site. The site of insurance B also demonstrated a good navigation strategy employing a navigation bar.

Figure 5: Rating of the navigation of insurances A to E. Rating "1" indicates simple navigation whereas rating "7" indicates complicated navigation.

Graphics

First, a screenshot of insurance D is shown in figure 6. It represents a negative example for colors (according to the test persons' rating).


Figure 6: screenshot of insurance D.

Graphics was an interesting criterion. The participants were asked: "Was the arrangement of the graphics clear?". The answers ranged from "1", indicating clearly arranged, to "7", indicating badly arranged.

Graphics of banks

Banks are generally expected to make a trustworthy impression: the more venerable the graphics and colors, the better the judgment of the test persons. This is a good example of the importance of the first impression of a web site. Well-adapted and clearly arranged graphics from different domains were found on the sites of bank C and bank D. A negative example is bank E, whose site had evidently too many graphics.

Figure 7: Results from evaluating graphics of banks

Graphics of newspapers

Concerning the arrangement of graphics on the daily newspaper sites, the site of newspaper E gives a positive example: here the choice of less conspicuous graphics was an advantage. The sites of newspaper A and newspaper D also showed tendencies in that direction. Pictures and graphics are crucial to printed newspapers. On the web, however, the immense number of pictures was not rewarded by the users. This is partly due to the extended loading times.


Figure 8: Results from evaluating graphics of daily newspapers

Graphics of insurances

Concerning the insurances, we encountered a quite contrasting assessment of insurance D's site. Not only the graphics but also the colors used for this site seemed horrible to usability experts. In contrast, the test persons apparently liked it. Obviously, experts are not always right with their recommendations. The site of insurance E most closely fits usability criteria; this site was also judged clear.

Figure 9: Results from evaluating graphics of insurances

General Results and Conclusions

After having analyzed the single sites of the three areas, we were interested in usability mistakes which all tested areas have in common. We looked at tendencies that emerged while the participants were testing the sites.

One of the most crucial issues found concerns the clarity of the text. Web sites often use domain-specific vocabulary (as banks or insurances do). This mainly confused users who were just looking for basic information about products or services. So, sites with a strong inclination towards domain-specific terms had


evidently lower usability than others. Banks and insurances in particular should, by their very nature, be generally comprehensible to the average user.

Another issue, predominantly found with newspapers, is too much advertising in the form of animations. These animations distract the user from the actual information or service of the site. If the feeling about the advertisement is negative, the site no longer serves its purpose, and the user may be inclined to change to a different web site.

Yet another design mistake is excessive amounts of text on one page. The user is forced to endure loads of irrelevant information, sometimes never reaching his target. As web users generally exhibit little patience, they tend to avoid such sites. Some test persons suggested, after testing the sites, subdividing the pages.

Concerning colors, in the case of banks users prefer a more venerable appearance. For the sites of newspapers and insurances, there was a tendency towards slightly brighter and more intensive colors. Web sites with rather good ratings in other respects received bad color ratings when they used more dignified colors like the banks did. Except for the one example where a site was judged good in spite of bad design (according to usability theory), overly bright colors were not rated very well. For example, one of the tested sites had an orange background color and yellow text color, which elicited mixed reactions from the users.

The usage of frames is generally not very advantageous, but it is becoming more and more accepted by users. This leads to the assumption that frames do not actually disturb the user. The application of a navigation bar is very well accepted by users. Some sites offered a search tool to seek relevant information within the site, which points to a new trend, too.

As a general result of this study, we want to emphasize the following. Testing with real users is the most effective way to make a system usable; however, it requires much time and preparation to conduct such a test, e.g. trying out a variety of tasks, controlling the testing environment, using measurement instruments (videotaping, recording keystrokes, etc.), or combining the testing with other methods of data collection, such as user interviews.

Designing for usability means understanding user tasks. Task analysis is essential for good design. Unfortunately, it is often ignored or given only minimal attention in web design as well. Another crucial point in designing for usability concerns user expectations. It is vital to keep in mind what target group the potential users form and what kind of expectations they bring. Typically, web sites are developed for users who have at least minimal expectations of the web site, and these expectations suggest a certain strategy for handling the actual site.

Checklists are very helpful for the inspection of usability principles. They are often applied in the manner of heuristic evaluation. By means of these checklists it is possible to check isolated usability criteria for their correct implementation. The results obtained this way are quite accurate even with a small number of testers. A great advantage of using checklists is that tests can be applied in every phase of the design, due to the possibility of tailoring a checklist precisely to the needs of the site under test. A major disadvantage is the limitation of a checklist to a single web page: every web page needs its own checklist, which cannot be put into context. For testing several related web pages, e.g. a complex system of pages like a hierarchical structure, an empirical test like scenario-based testing is most often the best choice.

References

Alpar, P. (1999). Satisfaction with a Web Site: Its Measurement, Factors and Correlates. In A.-W. Scheer & M. Nüttgens (Eds.), Electronic Business Engineering / 4. Internationale Tagung Wirtschaftsinformatik. Heidelberg: Physica-Verlag.

Carroll, J. M., Kellogg, W. A., & Rosson, M. B. (1991). The Task-Artifact Cycle. In J. M. Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface. Cambridge, England: Cambridge University Press, 74-102.

Chapanis, A. (1991). Evaluating usability. In B. Shackel & S. J. Richardson (Eds.), Human Factors for Informatics Usability. Cambridge: Cambridge University Press, 359-395.

Diaper, D. (Ed.) (1989). Task Analysis for Human-Computer Interaction. Chichester, England: Ellis Horwood.

Lindgaard, G. (1994). Usability Testing and System Evaluation. London: Chapman & Hall.


Nielsen, J. (1994). Usability Engineering. Boston, MA: AP Professional (slightly expanded paperback edition).

Osterbauer, C. (1998). Usability im World Wide Web. Master's thesis, University of Vienna (in German).

Patterson, G. (1994). A method of evaluating the usability of a prototype user interface for CBT courseware. In M. D. Brouwer-Janse & T. L. Harrington (Eds.), Human-Machine Communication for Educational Systems Design. Berlin: Springer-Verlag, 291-298.

Potosnak, K. (1989). When usability is not the answer. IEEE Software, 6(4), 105-106.

Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-Computer Interaction. Wokingham, England: Addison-Wesley.

Reed, S. (1992). Who defines usability? You do! PC/Computing, 5(12), 220-221, 223-224, 227-228, 230, 232.

Skelton, T. M. (1992). Testing the usability of usability testing. Technical Communication, 39(3), 343-359.

Spool, J. M., Scanlon, T., Schroeder, W., Snyder, C., & DeAngelo, T. (1997). Web Site Usability: A Designer's Guide. User Interface Engineering.

Hypertext References

HREF 1: http://www.uned-uk.org/toolkits/interfacedesign/interfacedesign.htm
HREF 2: http://trochim.human.cornell.edu/webeval/webeval.htm
HREF 3: http://www.useit.com/alertbox/9605.html
HREF 4: http://www.useit.com/alertbox/990516.html
HREF 5: http://webreview.com/wr/pub/97/04/25/usability/index.html
HREF 6: http://www.pantos.org/atw/35317.html
HREF 7: http://www.dcc.unicamp.br/~buzato/mc750/webUsability/stat.1996.levi_michael.The_Paper.html

[ Proceedings ] AusWeb2K, the Sixth Australian World Wide Web Conference, Rihga Colonial Club Resort, Cairns, 12-17 June 2000. Contact: Norsearch Conference Services, +61 2 66 20 3932 (from outside Australia), (02) 6620 3932 (from inside Australia), Fax (02) 6622 1954.
