Scenario Advisor Tool for Requirements Engineering

Jae Eun Shin, Alistair G. Sutcliffe, Andreas Gregoriades
Centre for HCI Design, School of Informatics, University of Manchester
P.O. Box 80, M60 1QD, U.K.
tel +44 (0)161 200-3315; fax +44 (0)161 200-3324
[email protected]

Abstract

This study investigates the usefulness of a scenario advisor tool built to help requirements engineers generate sufficient sets of scenarios in the domain of socio-technical systems. The tool provides traceability between scenario models and requirements and helps to generate new scenarios and scenario variations. Through two series of evaluation sessions, we found that the scenario advisor tool helped users to write sounder scenarios without any domain knowledge, and to generate more variations on existing scenarios by providing specific scenario generation hints for each scenario component. The tool should improve the reliability of requirements elicitation and validation.

Keywords: scenario generation; scenario schema; requirements elicitation; requirements validation; traceability

1. Introduction

The scenario-based approach to RE has become popular among requirements engineers as a means of eliciting and validating requirements. Carroll [1, 2, 3] focused on the importance of using scenarios as a user-oriented approach to systems design. Rolland et al. [4] explored the issues underlying scenario-based design and proposed a 4-dimensional framework which advocates that a scenario-based approach can be defined by its form, contents, purpose, and life cycle. Sutcliffe et al. [5] developed a method and software assistant tool for scenario-based RE which expanded pathways through a use case as well as suggesting alternative scenarios from a limited taxonomy of failure types. Although this tool did take environmental influences into account when generating alternative scenarios, it generated too many possible scenarios.

Scenarios play a role as mediator between design artefacts (i.e. the requirements specification, the business model, and the user interface prototype) and software engineers [6]. Carroll [7] explained the scenario concept as projecting "a concrete narrative description of activity that the user engages in when performing a specific task". On the other hand, Zhu and Jin [8] pointed out that a complicated problem can be decomposed systematically and naturally by identifying a set of scenarios to elicit information separately. Haumer et al. [9] used scenarios to identify goals that systems should achieve in terms of fitness for users.

From analysing scenario-based projects in industry, Weidenhaupt et al. [10] pointed out that writing reliable scenarios requires experts' domain knowledge. Cunning and Rozenblit [11] also questioned how reliable scenarios could be generated, and developed a method for generating test scenarios from a structured requirements specification, since they noted that "incomplete, ambiguous, incompatible and incomprehensible requirements lead to poor designs". Their method, however, generates many possible event scenarios, as a tree structure, based only on the specification and supplied constraints, and it does not take environment or context into account. Potts [12] developed a method for inquiry-driven requirements determination (ScenIC) based on an analogy with human memory. In ScenIC, scenarios are used to explore possibilities for allocations of tasks to actors, and requirements for obstacle resolution. Sutcliffe and Ryan [13] developed the SCenario Requirements Analysis Method (SCRAM), which recommended a combination of concept demonstrators, scenarios and design rationale. The SCRAM method uses scenarios with a prototype artefact to help users relate the design to their work/task context and discover requirements defects by interactive walkthrough.

A key problem for scenario-based RE is to collect or generate a necessary and sufficient set of scenarios for test coverage. However, there are few rigorous methods or tools to generate sufficient sets of scenarios for RE. The objective of this study, therefore, is to build a scenario advisor tool which provides help in generating a sufficient set of scenarios for the design of socio-technical systems while avoiding the overload problem of previous tools which generated too many scenario variations. Our focus on socio-technical systems is motivated by systems engineering domains and safety-critical systems in which operational problems frequently have causes in the environment [14, 15].

In the remainder of this paper, we describe the scenario advisor tool and how it was evaluated to investigate its usefulness. In section 2, we explain the scenario schema and the advisor tool with its main features (annotation, traceability and scenario generation help). In section 3, we outline two series of experiments with the details of the evaluation methods and experimental tasks. The results from the experiments are summarised in section 4. In section 5, we conclude the study and discuss the findings and future work.

2. Scenario Generation Process and the Advisor Tool

The process commences by assuming an initial seed scenario has been elicited from users. The objective of the process, and tool support, is to elaborate the initial seed scenario into a large number of variations to cover different circumstances and states that might be suggested from the initial starting point. The approach is user driven, since our previous work on automated generation of scenario permutations [5] demonstrated that users could be overwhelmed with too many variations. Hence we wished to provide facilities for suggesting variations, while leaving to the user the choice about how many variations to make and where in a scenario to make them.

Our views on scenarios and their use for requirements elicitation and validation are similar to those of Leite et al. [16], who developed a meta-model, the Language Extended Lexicon (LEL), to help elicitation and representation of the language used in scenarios and use cases. In LEL, scenarios are used to understand the application and its functionality: each scenario describes a specific situation of the application, centring attention on user and system behaviour. Their scenario model is a structure composed of title, goal, context, resources, actors, episodes and exceptions, and attribute constraints.

We developed a knowledge representation schema for scenarios, and hypertext technology to navigate through the scenario schema. The motivation for building a hypertext tool for the schema comes from Kaindl's RETH (Requirements Engineering Through Hypertext) method, which provides users with a combined model of requirements and domain objects in a hypertext structure [17]. In RETH, the use of hypertext technology allows the user to integrate natural language text into the combined object-oriented model. We first developed a knowledge representation schema for scenarios, then built a scenario annotation editor tool for marking up scenario narratives with schema components. We added a traceability feature to the tool to allow users to trace model components from segments in scenario narratives.

The scenario advisor tool consists of a scenario schema and a scenario annotation editor. The tool has three main features: (1) annotation for marking up selected scenario narratives, (2) support for traceability between scenario narratives and model components, and (3) help for scenario generation. The conceptual architecture of the tool is illustrated in figure 1. The process of generating scenarios starts with users selecting a scenario narrative from the database and then annotating scenario segments using mark-up labels from the schema. This allows the tool to invoke rules to suggest variations according to properties of different components or relationships between components; however, rather than automatically suggesting a vast number of possible variations, we adopted a browsing approach so the user can access suggestions on demand. The user can point to components (words/phrases) in the scenario narrative, see which schema component they relate to and then get suggestions for variations pertinent to the specific, highlighted point in the scenario.

2.1. Scenario Schema Tool

The scenario schema was built upon existing conceptual modelling languages (UML) and i* [18, 19], established in RE. We identified scenario components based on relevant literature [2, 5, 18, 19, 20, 21] and investigation of many scenario examples, and then categorised the components into five areas: Actor, Intention, Task, Environment and Communication. Our schema contains concepts that are familiar in i* ontology (e.g. agents, tasks, goals), but we have extended the i* model with new constructs for modelling the system environment (e.g. time, location) and for argument and communication (e.g. assumption, attitude). Scenario narratives often contain opinions and arguments explaining aspects of the modelled world; thus, we modelled not only the domain context but also argument in a scenario. In our taxonomy, each area is defined as follows:

• Actor area: describes stakeholders, their groups or organisation in a work process, including machine agents; i.e. activity, agent, attribute, group, organisation, physical structure, and role.

• Intention area: describes stakeholders' intentions in performing tasks; i.e. goal, objective, plan, and policy.

• Task area: describes work to be done or actions undertaken by agents to achieve goals or objectives with associated objects; i.e. action, event, object, procedure, resource, state, and task.

• Environment area: describes the work environment and context of the domain; i.e. social environment, economic environment, physical environment, location, situation, and time.

• Communication area: describes arguments and attitudes affecting the achievement of goals or objectives; i.e. argument, assumption, attitude, causation, consequence, constraint, context, decision, evidence, interpretation, issue, justification, position, solution, and viewpoint.
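The five areas and their component lists lend themselves to a simple lookup structure for tracing a mark-up label back to its area. The sketch below is an illustration only, not the tool's implementation (which the paper does not describe at code level); the component names are taken from the taxonomy above, while the dictionary layout and function are our assumptions.

```python
# Illustrative only: the five schema areas and their components, as listed
# in the taxonomy above. The dictionary layout is an assumption, not the
# tool's actual data model.
SCHEMA = {
    "Actor": ["activity", "agent", "attribute", "group",
              "organisation", "physical structure", "role"],
    "Intention": ["goal", "objective", "plan", "policy"],
    "Task": ["action", "event", "object", "procedure",
             "resource", "state", "task"],
    "Environment": ["social environment", "economic environment",
                    "physical environment", "location", "situation", "time"],
    "Communication": ["argument", "assumption", "attitude", "causation",
                      "consequence", "constraint", "context", "decision",
                      "evidence", "interpretation", "issue", "justification",
                      "position", "solution", "viewpoint"],
}

def area_of(component):
    """Return the schema area a mark-up label belongs to, or None."""
    for area, components in SCHEMA.items():
        if component.lower() in components:
            return area
    return None

assert area_of("Agent") == "Actor"
assert area_of("Time") == "Environment"
```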

To represent the scenario components and their relationships clearly, we developed a hypertext-based schema for each area of model components. See Appendix A for informal definitions of the scenario components, which enable language-based understanding for marking up, and Appendix B for the schema diagram of each area explaining the relationships between components. The displays of each area share components with other areas to help users maintain their overall context as they change between schema views.

2.2. Annotation for Traceability

Without traceability information, the usefulness of scenarios and models for interpreting requirements is limited [22]. As we discovered from our previous research [23], annotating scenarios is labour-intensive and requires expert domain knowledge. Nevertheless, we provide this annotation feature in the tool to support traceability between scenario narratives and model components. The benefit of marking up scenario narratives is that facts in narratives can be traced to models of the scenario expressed in the schema, and generation hints can then be accessed. Furthermore, relationships between facts and argument implicit in a text are made explicit by the act of mark-up and modelling. However, the extent of mark-up will be determined by the time and resources available. Ideally, a thorough mark-up will yield benefits in improved knowledge generated via modelling and expanding the scenario set. To operate the tool as illustrated in figure 2, we select a scenario narrative marked up with scenario components.


The mark-up tags follow HTML notation: for instance, in <Location>US test site</Location>, the tag without '/' (e.g. <Location>) indicates the starting point of the component and the tag with '/' (e.g. </Location>) indicates the end point. The scenario annotation tool allows mark-up of the scenario text to overlap, so that one component's tags may enclose text already tagged with other components, e.g. <Action><Agent>Ground radar</Agent> detects <Object>missile</Object></Action>.
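Because the tags follow HTML notation, a marked-up narrative can be processed with ordinary pattern matching. The fragment below is a minimal sketch under our own assumptions (it is not the tool's parser): it extracts only innermost tag pairs, so enclosing tags such as <Action> around nested mark-up are ignored.

```python
import re

# Minimal sketch: extract (label, text) pairs from a marked-up narrative.
# Matches only innermost tag pairs (text containing no further tags);
# crossing/overlapping tags would need a real parser, not a regex.
TAG = re.compile(r"<(?P<label>\w+)>(?P<text>[^<]*)</(?P=label)>")

def tagged_segments(narrative):
    """Yield (component label, enclosed text) for each innermost tag pair."""
    for m in TAG.finditer(narrative):
        yield m.group("label"), m.group("text").strip()

example = ("<Action><Agent>Ground radar</Agent> detects "
           "<Object>missile</Object></Action>")
print(list(tagged_segments(example)))
# [('Agent', 'Ground radar'), ('Object', 'missile')]
```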

When annotating scenario text, users segment the text by sentences, phrases, or prepositional clauses. The scenario can be searched for nouns (agents, objects, etc.) and verbs (actions, tasks, etc.) which suggest mark-up labels (a toy illustration of such keyword-based label suggestion follows the session steps below). However, other components such as goals require more experience, and argumentation components in the communication area require the most mark-up training. The scenario can be exhaustively marked up with all the schema areas, or partially marked up according to the user's wishes.

On double-clicking any component tag (e.g. <Agent>, <Action>) of the marked-up scenario, the scenario annotation editor (figure 3) opens the relevant schema diagram and highlights the related components. For instance, when we analyse a scenario which is already marked up with model components, we may not be sure what the component 'Agent' means. In this case, we just double-click the tagged segment in the scenario narrative (e.g. <Agent>A radar</Agent> detects…), and the tool shows the relevant schema area, 'Actor', to which the Agent component belongs, and highlights the 'Agent' node in the schema diagram. We can also find the Agent's definition with synonyms, relationships with other components, and component properties by moving the cursor over the node (see Appendix A for the details of scenario components).

2.3. Scenario generation help

The advisor tool helps users to generate new scenarios or produce variations on existing scenarios by providing scenario generation hint questions (e.g. How does reliability of a machine agent affect achieving a task?). Scenario generation hints are accessed by clicking on a component node (e.g. Agent) in a schema diagram; the tool then shows a pop-up window listing properties of the selected node (e.g. the human agent's Capability), as shown in figure 4. Once one of the properties is selected, the tool displays the hints with an example. A typical session using the advisor tool consists of the following steps:

• select a scenario marked up with schema components from the scenario database;

• trace relevant components in the scenario schema;

• select a component from the schema to reveal definitions, synonyms and properties;

• select a property of the component to reveal scenario generation hint questions, e.g. select one of the human agent's properties;

• answer the hint questions, using the schema as a reference to identify relationships between relevant components.
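As noted above, nouns and verbs in the narrative suggest candidate mark-up labels. The toy sketch below illustrates the idea with a small hand-made keyword lexicon; the lexicon entries and function are our assumptions for illustration, not part of the tool.

```python
# Toy illustration: suggest candidate mark-up labels for recognised
# keywords. The lexicon is a hand-made assumption fitted to the
# 'Missile Mission' example; the real tool leaves labelling to the user.
LEXICON = {
    "satellite": "Agent", "radar": "Agent", "booster": "Agent",
    "missile": "Object", "vehicle": "Object",
    "detects": "Action", "fired": "Action", "plots": "Task",
    "course": "Resource",
}

def suggest_labels(sentence):
    """Return (word, suggested label) pairs for recognised keywords."""
    pairs = []
    for word in sentence.split():
        key = word.lower().strip(".,'")
        if key in LEXICON:
            pairs.append((word, LEXICON[key]))
    return pairs

print(suggest_labels("Ground radar detects missile and plots its course."))
# [('radar', 'Agent'), ('detects', 'Action'), ('missile', 'Object'),
#  ('plots', 'Task'), ('course.', 'Resource')]
```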

There are 56 scenario generation hint questions overall; some examples are shown in table 1. We provided one hint question for each property, or one for each component if the component has no property options. The hints are derived from interactions between causal factors in error taxonomies [15, 24].
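One plausible way to organise the 56 hint questions is as a store keyed by component and property, mirroring the layout of table 1. The sketch below is an assumption about organisation, not the tool's actual knowledge base; the two questions are taken from table 1.

```python
# Sketch of a hints store keyed by (component, property), one question per
# property as the paper describes. The layout is assumed; the questions
# themselves come from table 1.
HINTS = {
    ("Agent", "Human capability"):
        "How would poor capability of a human agent affect completion "
        "of the task, or achievement of the goal?",
    ("Agent", "Machine reliability"):
        "How would lack of reliability of a machine agent affect "
        "completion of the task, or achievement of the goal?",
}

def hint_for(component, prop):
    """Look up the generation hint for a component property, if recorded."""
    return HINTS.get((component, prop),
                     "No hint recorded for this property.")

print(hint_for("Agent", "Machine reliability"))
```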

3. Evaluation Method

We conducted a series of experiments to investigate the effectiveness of the scenario advisor tool. The motivation was to test whether the tool would be more effective than the equivalent paper-based method and, more importantly, to gain insight into how the tool was used in the process of generating scenarios, i.e. user strategies. Ten participants were asked to complete a set of tasks individually without the scenario advisor tool but with the same paper-based information, and a further ten individuals were tested with the tool.

All the sessions started with participants completing a pre-test questionnaire to collect their experience profiles. The users were then asked to read the experimental instructions. For users without the tool, the experimenter explained for five minutes how the paper-based information (scenario taxonomy tables and schema diagrams) could be used to complete the tasks. For users with the tool, the experimenter demonstrated the tool for five minutes. The users were then given ten minutes' familiarisation time, during which they were asked to complete a training task of finding scenario components and properties from the information provided.

There were two experimental tasks. First, the participants were asked to write new scenarios based on the 'Missile Mission' scenario, provided as both plain and marked-up narratives, within 15 minutes. They were then asked to generate as many scenario variations of the 'Missile Mission' scenario as they could, with or without the tool, for another 15 minutes. The tasks used for experiments with and without the tool were identical. All the sessions were audio-recorded while one experimenter observed each participant completing the tasks. After completing the tasks, each participant with the tool was asked to fill out a post-test questionnaire to evaluate user satisfaction with the tool, and all sessions ended with debriefing interviews.

3.1. Experimental subjects and materials

The experiment design was between groups with two conditions, with and without the tool. Eight postgraduates and two researchers (mean age = 28 years; mean computer use = 10.3 years; 6 male / 4 female) participated in the experiment without the tool. None had experience in scenario-based design, although three had some experience in writing scenarios at a novice level. Only one participant was familiar with the military domain.


Nine postgraduates and a researcher (mean age = 27.5 years; mean computer use = 10.5 years; 6 male / 4 female) participated in the experiment with the tool. None had experience in scenario-based design, although three had some experience in writing scenarios at a novice level. Two participants said they were familiar with the military domain, but they were not domain experts.

The scenario advisor was run on a laptop computer (Windows 2000) in the laboratory. Participants were asked to use the tool to get advice while they were completing two tasks: (1) writing a new scenario within the same domain as the scenario provided; (2) writing variations of the scenario provided. Table 2 summarises the experimental task instructions. While the users were using the advisor tool, they were asked to think aloud about any problems, and the observer filled in a usability observation form. Once they had completed the tasks, users were interviewed with the following debriefing questions:

• What was the most difficult part of this experiment session?

• What strategies did you use: for writing new scenarios? for writing variations of the 'Missile Mission' scenario?

• Did you find the scenario generation hints useful?

• What features should be added to this tool to make it more useful and effective? Please tell me any design ideas you might have.

4. Results and Analysis

Performance data were assessed by comparing subject scenarios to a standard solution. For each sub-task, we graded the scenario produced on a 50-point scale designed to reflect the increase in variety and interest in the generated scenario compared to the original, using a mixture of objective and subjective measures, as follows:

(a) 10 points for using different scenario components from the original scenarios. For task 1, where the schema has a total of 50 components, this measures the % increase in components used over the original scenario, e.g. 6 of the 15 components in the new scenario being different gives (6/15) * 100 = 40%, rescaled and rounded to 1-10 = 4 points. For task 2, many scenario variations were produced, so this was measured as the net component increase across all scenario variants.

(b) 10 points for using different scenario schema areas from the original scenarios. Since there were 5 areas in the schema, this measure amplified the score in (a) for use of more diverse components compared with the original, e.g. 2 areas in the original and 2 more in the scenario variants = 100% increase, or 10 points.

(c) 10 points for the length of the scenario. This was a simple word count, taking the % increase over the initial scenario, rescaled as before to 1-10.

(d) 10 points for different themes or plots. This was a subjective measure of the scenario (task 1) and set of scenario variations (task 2) by an independent expert, who was asked to rate the scenarios for variety in themes compared to the initial scenario. The expert rated the scenarios created by all subjects, using the scale worst = 0 to best = 10 points.

(e) 10 points for a good and detailed story. This was a subjective measure of the scenario (task 1) and set of scenario variations (task 2) by an independent expert, who was asked to rate the scenario for the interest and insight produced, given the initial scenario as a reference point and following the same practice as in (d).

Pre-test questionnaires were used to collect user profiles, while post-test questionnaires assessed user satisfaction. We used observation notes and audio-recordings of the evaluation sessions to analyse usability problems and users' scenario generation strategies. Debriefing interviews followed up observed usability problems and collected user suggestions for improvements.

4.1. Task Performance

All twenty participants completed the first experimental task (writing new scenarios), although two did not want to write any scenario variations for the second task because of their lack of motivation. As shown in tables 3a and 3b, overall user performance for task 1 (writing new scenarios) was significantly better with the tool (t-test: df = 18, t = 2.8432, p = 0.0053, indicating a difference in means). From observation during the experiments, an explanation for this performance difference appeared to be that users were able to find information about the scenario components and their relationships more easily with the hypertext tool. However, two users with the tool (users 12 and 13) showed poor motivation in completing the task and stated that they did not want to think or write about war-related topics. The users' experience in scenario writing did not seem to affect their task performance, although the low numbers precluded testing for this effect with an ANOVA. On the other hand, domain knowledge of the military context seemed to affect task performance positively, since the top-ranking performances were achieved by subjects 9, 15, and 20, who had some knowledge of the military domain at the novice level.

For task 2 (writing variations of the scenario provided), overall user performance was also significantly higher with the tool (tables 4a and 4b). The mean task performance score was 43.5 with the tool and 25 without (df = 18, t = 2.4450, p = 0.0124). Four users without the tool performed poorly on this task, while none showed poor performance with the tool.
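As a cross-check, the task 1 statistic can be reproduced from the scores in tables 3a and 3b. The sketch below is our own, not part of the original analysis; it assumes a pooled-variance two-sample t-test with a one-tailed p-value, which matches the reported figures.

```python
from scipy import stats

# Task 1 scores from table 3a (without tool) and table 3b (with tool).
without_tool = [20, 40, 40, 30, 20, 5, 20, 40, 40, 10]   # mean 26.5
with_tool    = [40, 20, 30, 40, 50, 50, 50, 45, 40, 50]  # mean 41.5

# Pooled-variance (Student) t-test; alternative='greater' gives the
# one-tailed p-value that appears to match the reported p = 0.0053 (df = 18).
t, p = stats.ttest_ind(with_tool, without_tool, equal_var=True,
                       alternative="greater")
print(f"t = {t:.4f}, p = {p:.4f}")  # t = 2.8432, p ≈ 0.0053
```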


From the session observation, it was apparent that providing scenario generation hint questions improved the performance of users with the tool, while the main reason for lower performance without it was lack of domain knowledge (users 1, 5, 6, and 10). Although all the users without the tool were instructed by the experimenter in how to generate scenario variations, with identical examples, users 6 and 10 wrote inappropriate variations while users 1 and 5 did not finish the task within the time limit. For the second task, the users' experience in scenario writing did not affect their task performance, but again the users with some domain knowledge were the best performers.

Without the tool, female users' performance was lower than male users', possibly because the female users were less motivated to write about the military domain. With the tool, although male users performed slightly better on task 1, female users' performance on task 2 was higher. However, there were no consistent gender differences, since the difference in total task performance was insignificant (binomial test).

4.2. Usability and User Satisfaction

The user problems were categorised into three sub-sets: content problems, conceptual problems and missing requirements. The two content problems were: one incorrect property list for machine agent was found; and three users reported that unfamiliar terminology (i.e. schema, component area, and related components) made it hard to understand how to use the information provided. Two misleading cues were reported: confusion at finding the same components in different areas of the schema diagrams (e.g. a component appearing in both the [Actor] and [Task] areas); and confusion about the functions of the component labels used for tagging (e.g. a user tried to use this button to reveal a relevant area). The four missing requirements were as follows:

• Need scrollable or resizable textboxes for definitions, synonyms and properties in the schema window.

• Need help with software usage.

• Need a glossary for unfamiliar terminology (e.g. Area, Property, Schema, Related components).

• Highlighted mark-up tags from the annotation editor need to be automatically reset for a new component search.

Despite the above problems, users found the tool very useful, as illustrated by the satisfaction levels in the post-test questionnaire shown in figure 5. Two users complained that the scenario used for the experimental task was not easy to understand because of their lack of domain knowledge, and this reduced their overall satisfaction level.

Users' responses to the post-test questionnaires were analysed in three groups: ratings of the annotator tool and traceability, ratings relevant to the scenario generation functions, and general usability (see table 5). Users' satisfaction levels on a 1-7 scale were above 5 except for two questions: (1) finding the advice required, and (2) comprehensibility of the scenario narrative used for the tasks. Although these ratings were still favourable, some users experienced problems in finding the property options, as not all the components had them. Also, some reported that the 'Missile Mission' scenario was difficult to understand, and one user mentioned that he might have performed better doing the same task with a business scenario.

From the debriefing interviews, the most difficult part of the experiment was writing scenarios, because of the users' lack of experience and lack of domain knowledge. Users found the scenario generation hints especially useful for writing scenario variations; some mentioned that more examples within the scenario generation hints would improve the tool. Users also mentioned that the tool would be more useful if it provided help on how to use it and on writing a scenario with a step-by-step procedure.

4.3. Scenario generation strategies

The observers' notes and audio-recordings of verbal protocols from the evaluation sessions were analysed to determine the users' strategies for writing scenarios with and without the tool. The strategies were classified into the following categories:

• Copy existing scenarios: users copied the initial seed scenario and added some variations to it.

• Use plain/marked-up scenario: specifies which original scenario was used for copying or as a starting point.

• Visualise with schema diagrams: users followed links from marked-up scenarios, and browsed the schema.

• Look up scenario components: explored properties of components.

• Look up taxonomy tables/hint questions: explored hints after browsing schema components.

• Use imagination: generated new variations without reference to schema/hints, although schema/hints were often visible on the tool user interface.

• Use domain knowledge: generated new variations by reference to domain knowledge.

• Generate new scenario: the act of recording the new scenario narrative or variation.

Strategies were recorded in sequence; most users started by copying part or all of the original scenario, then explored the schema and hint questions, and applied their imagination before recording a new scenario. However, there were several variations on this basic sequence, as illustrated in figures 6 and 7.

Without the tool, users used six different strategies (1A to 1F) for writing new scenarios, and four strategies (2A to 2D) for generating scenario variations, as shown in figure 6. For writing new scenarios, three out of ten users inspected the scenario schema diagrams and eight out of ten looked at the scenario taxonomy tables for reference. When writing scenario variations, however, six of the ten users used the scenario schema diagrams and nine looked up the scenario taxonomy tables before writing variations. Only one user used domain knowledge for writing new scenarios, while all users relied on their imagination for writing both scenario variations and new scenarios.

Figure 7 represents the scenario generation strategies used with the tool. There were six different strategies (3A to 3F) for writing new scenarios and four strategies (4A to 4D) for generating scenario variations with the tool. The figure shows that eight of the ten users preferred to use the marked-up scenario for writing new scenarios, and all ten users did so for writing scenario variations. This is because they found it easier to trace scenario components from the marked-up scenario in the annotation tool than to look up the information they needed in the schema. For writing new scenarios, five subjects used the schema diagrams and eight looked up the scenario components with the tool; all eight relied on their imagination, since only two had military domain knowledge. For writing scenario variations, all ten users used the schema diagrams, but only three of them did not look up the details of the scenario components. Eight users checked the scenario generation hint questions to write the variations.

Figure 8 summarises the most popular scenario generation strategies used in the different situations. Users without the tool did not look up the relationships between scenario components illustrated in the schema diagrams because they found it difficult to find component nodes in the schema. Users with the tool, however, tried to use all the information provided because it was easier to find the information they needed.

5. Discussion

Development of the Scenario Annotator/Generator has made two main contributions. First, we have developed a new schema for representing scenario-based knowledge that is semantically richer than the LEL framework [16], and converges with the ontologies in requirements modelling languages such as i* [18, 19]. However, we have added semantics describing arguments expressed by users about the domain, so the connection between causal argument, design rationale and components in specifications can be explicitly represented and traced. Second, the generator tool provides assistance for generating scenarios via a knowledge base of potential interactions between schema components, their properties and relationships. The generation hints produced from this knowledge have to be interpreted in the perspective of domain knowledge, which is why we chose the user-driven approach to scenario generation. However, in future work we will explore tailoring the schema with more domain-specific knowledge so that partially automatic generation of scenario variants may be possible.

Chance and Melhart [20] developed a taxonomy of scenarios to help developers identify classes of scenario. Their taxonomy is organised into a hierarchy according to purpose (e.g. operational scenario, failure scenario, performance scenario, refinement scenario and learning scenario). Each type of scenario is then described in terms of key attributes (description, creators/users, information needed, uses). However, their view of scenario taxonomy is different from ours, as we focused on scenario components and their relationships. Our scenario schema has gone beyond i* by identifying environmental and communication factors.

The CREWS-SAVRE project supported scenario generation by providing structured templates, with style and content guidelines from the CREWS-ECRITOIRE method [25, 26]. The style guidelines provide recommendations on the expected form and content of scenarios. Scenarios were defined as pathways consisting of sequences of actions and interactions between agents. However, scenario actions can be affected by many other factors, and the CREWS tool does not provide rich information about components to be used for writing scenarios. Our tool, on the other hand, provides a rich scenario taxonomy with schema diagrams which can explain not only scenario actions but also the factors that could affect those actions: the environment and the arguments justifying user goals, directions and assumptions. The ART-SCENE process [27] integrated CREWS-SAVRE scenario generation with other modelling tools, i.e. REDEPEND, AutoFocus, and Agent Sheets. The integrated environment provides system engineers with a plug-and-play simulation for exploring requirement-architecture compliance for different scenarios. In ART-SCENE, system engineers generate scenarios from use case specifications according to CREWS-SAVRE, then walk through each generated scenario with stakeholders using the web-based Scenario Presenter tool to document all the requirements discovered. Thus, the ART-SCENE environment focuses on making requirements testable, simulating architectural designs and system behaviour in its environment.

We discovered that users can generate better scenarios by using our advisor tool, and this should lead to improvements in requirements elicitation and validation. The scenario advisor tool helped users to write sounder scenarios without any domain knowledge, and is also useful for generating more variations on existing scenarios by providing scenario generation hints for each property of the model components. The scenario annotation editor allows users to find information easily by providing traceability between scenario narratives and the scenario model. However, mark-up is labour-intensive and needs domain experts with knowledge of the scenario taxonomy to mark up scenarios. The main benefit of using the tool, we suspect, will be that it stimulates thought, and this will lead not only to a richer set of scenarios but also to greater insight into requirements problems that are implicit in scenarios. Helping to generate scenario variations will be beneficial for producing test data for requirements validation, following methods such as ScenIC [12] in which scenarios are used as challenges and obstacles to validate requirements for achieving user goals. While the tool cannot guarantee a necessary and sufficient set of scenarios, it is one step on the long road to providing more relevant and appropriate sets of scenarios.

Our experimental participants used different strategies for writing new scenarios and for generating scenario variations with and without the tool. The subjects used more information and scenario components with the tool than with the paper-based method, and this may have led to better task performance. The results from the debriefing interviews showed that the users needed help with the scenario generation procedure, and domain knowledge. Since our users did not have any expert knowledge of the military domain, their imagination for completing the tasks was limited. Further study is necessary on the relationship between external decision support and users' mental processes in scenario-based reasoning to improve the design of tool support. For instance, some users mentioned that the tool could be improved if it provided a scenario template or a step-by-step procedure on how to write a good scenario in a specific domain. We will therefore improve the scenario advisor tool by providing support for the user strategies we observed, as well as conducting further empirical studies. We will also respond to our subjects' suggestions by developing a help system with a step-by-step scenario generation procedure or a scenario template, and by providing domain-specific information with more examples. Finally, future work will develop an automatic scenario generation tool with information extraction techniques [28] to address the labour-intensive nature of scenario mark-up. If text scenarios could be automatically transformed into marked-up models with minimal human assistance, it may be possible to automatically generate scenarios from a limited set of domain knowledge supplied by the user.

6. Acknowledgements

This work was funded by the EPSRC Systems Integration Programme SIMP project (Systems Integration for Major Projects).


7. References

1. Carroll JM. Making use: scenario-based design of human-computer interactions. Cambridge MA, MIT Press, 2000
2. Carroll JM (ed.). Scenario-based design: envisioning work and technology in system development. New York, Wiley, 1995
3. Carroll JM, Mack RL, Robertson SP, Rosson MB. Binding objects to scenarios of use. International Journal of Human-Computer Studies 1994; 41: 243-276
4. Rolland C, Achour CB, Cauvet C, Ralyte J, Sutcliffe AG, Maiden NAM, et al. A proposal for a scenario classification framework. Requirements Engineering 1998; 3 (1): 23-47
5. Sutcliffe AG, Ryan M. Experience with SCRAM: a SCenario Requirements Analysis Method. In: Proceedings IEEE International Symposium on Requirements Engineering: RE-98, 6-10 April 1998, Colorado Springs CO. Los Alamitos CA, IEEE Computer Society Press, 1998. pp. 164-171
6. Hertzum M. Making use of scenarios: a field study of conceptual design. International Journal of Human-Computer Studies 2003; 58: 215-239
7. Carroll JM. Scenario-based design. In: Helander MG, Landauer TK, Prabhu PV (eds.). Handbook of human computer interaction. Amsterdam, Elsevier, 1997. pp. 383-406
8. Zhu H, Maiden NAM, Pavan P. Scenarios: bringing requirements and architectures together. In: Proceedings 2nd International Workshop on Scenarios and State Machines: Models, Algorithms and Tools (SCESM): ICSE-03, Portland OR. Los Alamitos CA, IEEE Computer Society Press, 2003
9. Haumer P, Pohl K, Weidenhaupt K. Requirements elicitation and validation with real world scenes. IEEE Transactions on Software Engineering 1998; 24 (12): 1036-1054
10. Weidenhaupt K, Pohl K, Jarke M, Haumer P. Scenario usage in software development: current practice. IEEE Software 1998; 15: 34-45
11. Cunning SJ, Rozenblit JW. Test scenario generation from a structured requirements specification. In: Proceedings IEEE Conference and Workshop on Engineering of Computer-Based Systems, Nashville TN. Los Alamitos CA, IEEE Computer Society Press, 1999. pp. 166-172
12. Potts C. ScenIC: a strategy for inquiry-driven requirements determination. In: Proceedings 4th IEEE International Symposium on Requirements Engineering, 7-11 June 1999, Limerick, Ireland. Los Alamitos CA, IEEE Computer Society Press, 1999. pp. 58-65
13. Sutcliffe AG, Shin JE, Gregoriades A. Tool support for scenario-based functional allocation. In: Johnson CW (ed.). Proceedings 21st European Conference on Human Decision Making and Control, 15-20 July 2002, Glasgow. 2002. pp. 81-88
14. Reason J. Human error. Cambridge, Cambridge University Press, 1990
15. Reason J. Managing the risks of organizational accidents. Aldershot, Ashgate, 2000
16. Leite J, Hadad GDS, Doorn JH, Kaplan GN. A scenario construction process. Requirements Engineering 2000; 5 (1): 38-61
17. Kaindl H. A practical approach to combining requirements definition and object-oriented analysis. Annals of Software Engineering 1997; 3: 319-343
18. Mylopoulos J. Information modeling in the time of the revolution. Information Systems 1998; 23 (3/4): 127-156
19. Yu E. Towards modelling and reasoning support for early-phase requirements engineering. In: Proceedings Third IEEE International Symposium on Requirements Engineering. Los Alamitos CA, IEEE Computer Society Press, 1997. pp. 226-235
20. Chance BD, Melhart BE. A taxonomy for scenario use in requirements elicitation and analysis of software systems. In: Proceedings IEEE Conference and Workshop on Engineering of Computer-Based Systems, Nashville TN. Los Alamitos CA, IEEE Computer Society Press, 1999. pp. 232-238
21. Dearden A, Harrison M, Wright P. Allocation of function: scenarios, context and the economics of effort. International Journal of Human-Computer Studies 2000; 52 (2): 289-318
22. Egyed A. A scenario-driven approach to traceability. In: Proceedings 23rd International Conference on Software Engineering (ICSE), Toronto. Los Alamitos CA, IEEE Computer Society Press, 2001. pp. 123-132
23. Sutcliffe AG, Maiden NAM, Minocha S, Manuel D. Supporting scenario-based requirements engineering. IEEE Transactions on Software Engineering 1998; 24 (12): 1072-1088
24. Hollnagel E. Human reliability analysis: context and control. London, Academic Press, 1993
25. Achour CB, Rolland C, Maiden NAM, Souveyet C. Guiding use case authoring: results of an empirical study. In: Proceedings 4th IEEE International Symposium on Requirements Engineering, 7-11 June 1999, Limerick, Ireland. Los Alamitos CA, IEEE Computer Society Press, 1999. pp. 36-43
26. Rolland C, Souveyet C, Achour CB. Guiding goal modeling using scenarios. IEEE Transactions on Software Engineering 1998; 24 (12): 1055-1071
27. Zhu H, Jin L. Automating scenario-driven structured requirements engineering. In: Proceedings 24th Annual International Computer Software & Applications Conference (COMPSAC), Taipei. 2000. pp. 311-316
28. Cowie J, Lehnert W. Information extraction. Communications of the ACM 1996; 39 (1): 80-91

Figures, tables and appendices

Figure 1  Conceptual architecture of the scenario advisor tool, showing the major subsystems, databases and information flows
Figure 2  Example of a marked-up scenario
Figure 3  The scenario annotation editor
Figure 4  Scenario schema display with hints
Figure 5  Post-test questionnaire (individual user scores; overall satisfaction)
Figure 6  Scenario generation strategies without the tool (solid arrow indicates generate new scenario; dotted arrow indicates generate variations, shown in sequence)
Figure 7  Scenario generation strategies with the tool (solid arrow indicates generate new scenario; dotted arrow indicates generate variations, shown in sequence)
Figure 8  Summary of most popular scenario generation strategies
Table 1   Examples of scenario generation hint questions
Table 2   Summary of experimental tasks ('✓' = applicable; 'X' = inapplicable)
Table 3a  User performance for writing new scenarios without the tool (task 1)
Table 3b  User performance for writing new scenarios with the tool (task 1)
Table 4a  User performance for writing scenario variations without the tool (task 2)
Table 4b  User performance for writing scenario variations with the tool (task 2)
Table 5   Average satisfaction scores


Figure 1

[Diagram not reproduced. Recoverable elements: a scenario annotation editor linked to a scenario narratives database (selected scenario, marked-up scenarios, mark-up labels); a scenario schema display linked to a model components database (selected components and properties; definitions, synonyms and properties of components; trace links); and a scenario advisor generator linked to a scenario generation hints database (scenario generation hint questions and examples).]

Figure 2

Scenario: Missile Mission (marked-up)

<Action><Agent>A satellite</Agent> detects <Object>'enemy missile'</Object> <Time>moments after launch</Time> from <Location>US test site</Location>.</Action>
<Action><Agent>Ground radar</Agent> detects <Object>missile</Object> and <Task>plots its course</Task>.</Action>
<Action><Resource>Course data</Resource> passed to <Organisation>missile defence headquarters</Organisation>.</Action>
<Action><Object>Intercept missile</Object>, <Attribute>consisting of 'kill vehicle' mounted on <Agent>rocket booster</Agent></Attribute>, is fired.</Action>
<Action><Activity><Object>Kill vehicle</Object> uses <Resource>information from <Agent>ground radar</Agent></Resource> and <Resource>its own sensors</Resource></Activity> <Activity>to 'lock on' the <Object>enemy missile</Object></Activity>.</Action>

Figure 3

[Screenshot not reproduced. Labelled elements: scenario text (narrative); scenario component search; scenario component labels for marking up.]

Figure 4

[Screenshot not reproduced: scenario schema display with hints.]

Figure 5

[Bar chart not reproduced. Axis: user satisfaction (1 poor to 7 excellent), for the ten users with the tool; individual scores were 6.6, 6.4, 5.7, 5.6, 5.5, 5.5, 5.4, 4.7, 4.5 and 4.1.]

Figure 6

[Matrix figure not fully recoverable from the extraction. Columns (scenario generation steps): 1. Copy plots & themes of existing scenarios; 2. Visualise plots & themes with schema diagrams; 3. Look up scenario taxonomy tables; 4. Use imagination; 5. Use domain knowledge; 6. Generate new scenario/variations. Rows (strategies, with number of users): 1A: 1 user; 1B: 4 users; 1C: 1 user; 1D: 1 user; 1E: 2 users; 1F: 1 user; 2A: 1 user; 2B: 4 users; 2C: 2 users; 2D: 3 users. The per-strategy step markers could not be reconstructed.]

Figure 7

[Matrix figure not fully recoverable from the extraction. Columns (scenario generation steps): 1. Use marked-up scenario; 2. Use plain scenario; 3. Copy plots of existing scenarios; 4. Visualise plots with schema diagrams; 5. Look up scenario components; 6. Use imagination; 7. Use scenario generation hint questions; 8. Use domain knowledge; 9. Write new scenario/variations. Rows (strategies, with number of users): 3A: 1 user; 3B: 1 user; 3C: 2 users; 3D: 1 user; 3E: 4 users; 3F: 1 user; 4A: 4 users; 4B: 1 user; 4C: 2 users; 4D: 3 users. The per-strategy step markers could not be reconstructed.]

Figure 8

Writing new scenarios
  Without the tool (Strategy 1B): Copy plots & themes of existing scenario narratives → Look up scenario taxonomy table → Use imagination → Generate new scenario
  With the tool (Strategy 3E): Use marked-up scenario narratives → Visualise plots & themes with schema diagrams → Use scenario components → Copy plots & themes of existing scenario narratives → Use imagination → Generate new scenario

Writing variations
  Without the tool (Strategy 2B): Use scenario taxonomy table → Use imagination → Generate scenario variations
  With the tool (Strategy 4A): Use marked-up scenario → Visualise plots & themes with schema diagrams → Look up scenario generation hint questions → Generate scenario variations

Table 1

Area | Component | Property | Hint question
Actor | Agent | Human capability | How would poor capability of a human agent affect completion of the task, or achievement of the goal?
Actor | Agent | Machine reliability | How would lack of reliability of a machine agent affect completion of the task, or achievement of the goal?
Task | Task | Complexity | How would increased complexity of a task delay completion, or change the outcome?
Environment | Physical environment | Fatigue | How would increased fatigue decrease reliability of agents?
Intention | Goal | - | Which agents' resources/tasks could prevent completion of the goal? Does the goal depend on subgoals?
Communication | Attitude | - | Is the attitude of the agent reliable? Is the attitude of the agent justified?

Table 2

TASK 1: Writing new scenarios | Without the tool | With the tool
Read the plain or marked-up scenario (paper-based) of "Missile Mission" | ✓ | ✓
Select the marked-up scenario titled "Missile Mission" from the scenario database on the annotation editor | X | ✓
Use documents, scenario taxonomy, or scenario schema diagrams for necessary information | ✓ | X
Use the advisor tool to find necessary information | X | ✓
Write down as many new scenarios in the domain of the "Missile Mission" scenario as you can within 15 minutes | ✓ | ✓

TASK 2: Writing scenario variations | Without the tool | With the tool
Read the plain or marked-up scenario (paper-based) of "Missile Mission" | ✓ | ✓
Select the marked-up scenario titled "Missile Mission" from the scenario database on the annotation editor | X | ✓
Use documents, scenario taxonomy, or scenario schema diagrams for necessary information | ✓ | X
Use the advisor tool to find necessary information | X | ✓
Check the scenario generation hints for any components of the scenario if necessary (the paper-based information does not include the scenario generation hints, but verbal hints were given by the experimenter whenever asked by the subjects) | ✓ | ✓
See the example of the scenario variation on your answer sheet | ✓ | ✓
Write down as many variations of the "Missile Mission" scenario as you can within 15 minutes | ✓ | ✓

Table 3a

User # (Gender) | Performance score (max. 50) | Rank | Experience in scenario writing | Domain knowledge
1 (M) | 20 | 6 | None | None
2 (M) | 40 | 1 | None | None
3 (M) | 40 | 1 | None | None
4 (M) | 30 | 5 | None | None
5 (F) | 20 | 6 | Novice | None
6 (F) | 5 | 10 | Novice | None
7 (F) | 20 | 6 | Novice | None
8 (M) | 40 | 1 | None | None
9 (M) | 40 | 1 | None | Novice
10 (F) | 10 | 9 | None | None
Mean | 26.5 | | |

Table 3b

User # (Gender) | Performance score (max. 50) | Rank | Experience in scenario writing | Domain knowledge
11 (F) | 40 | 6 | Novice | None
12 (F) | 20 | 10 | None | None
13 (M) | 30 | 9 | None | None
14 (F) | 40 | 6 | Novice | None
15 (M) | 50 | 1 | None | Novice
16 (M) | 50 | 1 | None | None
17 (M) | 50 | 1 | Novice | None
18 (M) | 45 | 5 | None | None
19 (F) | 40 | 6 | None | None
20 (M) | 50 | 1 | None | Novice
Mean | 41.5 | | |

Table 4a

User # (Gender) | Performance score (max. 50) | Experience in scenario writing | Domain knowledge
1 (M) | 0 | None | None
2 (M) | 40 | None | None
3 (M) | 40 | None | None
4 (M) | 50 | None | None
5 (F) | 0 | Novice | None
6 (F) | 0 | Novice | None
7 (F) | 40 | Novice | None
8 (M) | 30 | None | None
9 (M) | 50 | None | Novice
10 (F) | 0 | None | None
Mean | 25 | |

Table 4b

User # (Gender) | Performance score (max. 50) | Experience in scenario writing | Domain knowledge
11 (F) | 50 | Novice | None
12 (F) | 40 | None | None
13 (M) | 30 | None | None
14 (F) | 50 | Novice | None
15 (M) | 50 | None | Novice
16 (M) | 50 | None | None
17 (M) | 30 | Novice | None
18 (M) | 35 | None | None
19 (F) | 50 | None | None
20 (M) | 50 | None | Novice
Mean | 43.5 | |

Table 5

Post-test questions: user satisfaction with the tool, 1 (poor) - 7 (excellent) | Total average (N=10)

Traceability with the tool
  Usefulness of traceability | 5.8
  Comprehensibility of contents of the schema | 5.8
  Usefulness of definitions and synonyms of components | 5.6
  Comprehensibility of the components' relationships | 5.3
  Finding related components | 5.7

Help on scenario generation
  Usefulness of scenario generation hints | 5.1
  Comprehensibility of the advice | 5.4
  Usefulness of the hint examples | 5.5
  Finding the advice required | 4.5

General usage of the tool
  All functions free from error | 5.4
  Remembering to return to functions once visited | 5.8
  Ease of use | 5.3
  Usefulness | 5.7
  Interface design | 5.3
  Comprehensibility of the scenario narrative used | 4.9

Appendix A. Scenario Taxonomy Tables

Actor Area

Component | Definition | Synonym | Property
Activity | Manifestations of behaviour; reported history; collections of action; higher order units. | behaviour (cf. task) | complexity, physical/cognitive activity
Agent | A human or machine that carries out a task; one that acts or has the power or authority to act; has subtypes of human and machine agents. | actor, artefact | human agent: capability, expertise, motivation, aptitude, responsibility, authority, trust; machine agent: reliability, task support, usability, utility
Attribute | Information or data that describes an agent or object. | characteristic, feature, quality, property | N/A
Group | An informal or temporary assembly of agents. | cluster, formation | capability, motivation, authority, trust
Organisation | The persons (or committees or departments, etc.) who make up a governing body and who work together; it has a leader. | constitution, unit, structure, group | capability, motivation, authority, trust
Physical structure | Something physical which is made up of a number of parts that are held or put together in a particular way. | compound, object, container | authority, size
Role | The characteristic and expected social behaviour of an agent. | function, occupation, position | N/A

Intention Area

Component | Definition | Synonym | Property
Goal | The purpose toward which an endeavour is directed; future intent. | intention, purpose, aim | importance, quality
Objective | Something worked toward or striven for; higher order goal. | intention, target | importance
Plan | A descriptive specification of plans. | strategy, approach | effectiveness, clarity
Policy | A plan or course of action, as of a government, political party, or business, intended to influence and determine decisions, actions, and other matters; a set of statements used to guide; a principle or procedure considered expedient, prudent, or advantageous. | principle, mission | effectiveness, clarity

Task Area

Component | Definition | Synonym | Property
Action | The process of acting or doing that results in change in the world; a physical change, as in position, mass, or energy, that an object or a system undergoes; actions are either continuous or discrete; actions have a duration. | behaviour, activity, procedure | complexity, duration, physical/cognitive
Event | Something that takes place; events can be contained in a message. | occurrence, happening | N/A
Object | Physical or conceptual thing that exists over time and is either changed by a task or changes other objects. In object-oriented programming, objects include data and the procedures necessary to operate on that data. | entity | physical/conceptual
Procedure | A set of instructions that performs a specific task; a sub-routine or function. | method, course of action, process | N/A
Resource | Something that can be used for support or help; an available supply that can be drawn on when needed; the ability to deal with a difficult or troublesome situation effectively. | means, tools, capability, materials | N/A
State | A type of attribute of an object, an agent, or a task. | condition | N/A
Task | A piece of work assigned or done as part of one's duties; a function to be performed; an objective. | duty, procedure, method | predictability, complexity, reliability

Environment Area

Component | Definition | Synonym | Property
Social environment | Social circumstances surrounding an agent or group of agents. | community, public | management factors, culture, time pressure, stress
Economic environment | Economic circumstances surrounding an agent or group of agents. | cost, financial factors | N/A
Physical environment | Physical circumstances surrounding an agent or group of agents. | tangible context | weather state (wind, cloud, etc.), climate, noise, interruptions, fatigue, stress
Location | Identifying the position of a plan, an act, or a site, etc. | place, situation, locality | N/A
Situation | Complex state description of context for activity and background circumstances. | circumstance, state, condition | state, temporary, logical/physical, critical
Time | A non-spatial continuum in which events occur in apparently irreversible succession from the past through the present to the future; an interval of time characterised by properties and the occurrences of certain conditions, events, or phenomena. | period, duration | time period, time point, scale
Environment | Social, economic and physical environment. | N/A | N/A

Communication Area

Component | Definition | Synonym | Property
Argument | A discussion about a topic or design problem, often with positions for and against; a controversial communication. | opinion, assertion, claim, dispute | N/A
Assumption | Something taken for granted or accepted as true without proof; a hypothesis that is taken for granted. | hypothesis, supposition | N/A
Attitude | An opinion, a state of mind or a feeling; disposition. | approach, mind-set, posture, belief | N/A
Causation | The act or agency or process of causing, by which an effect is produced. | causality, cause, reason, source | N/A
Consequence | A logical or natural conclusion or a result which follows from an action or condition. | effect, outcome, result | N/A
Constraint | Something that controls decisions, prevents or restricts actions of others. | restriction, limitation, check | N/A
Context | The circumstances and events surrounding or leading up to an event or occurrence. | circumstance, setting | N/A
Decision | A judgment or conclusion on an issue, or making up one's mind. | conclusion, judgment, assessment, evaluation | N/A
Evidence | A proof or an example helpful in forming a conclusion or judgment. | proof, data, verification | N/A
Interpretation | Explaining the meaning of; making sense of. | decoding, understanding, explanation | N/A
Issue | An important question that is in dispute and must be settled. | subject, topic | N/A
Justification | The act of justifying; a fact or circumstance that shows an action to be reasonable or necessary. | validation, good reason, explanation | N/A
Position | A point of view or attitude on a certain question; social standing or status; a situation as it relates to the surrounding circumstances; it separates viewpoints from alternative arguments. | situation, standpoint, status, purpose | N/A
Solution | The method of answering questions; solves a problem; achieves a goal. | result, explanation | N/A
Viewpoint | A position from which something is observed or considered. | perspective, opinion | N/A

Appendix B. Scenario Schema Diagrams

[Diagrams not reproduced: Actor Area; Intention Area; Task Area; Environment Area; Communication Area.]