
Rich-Media Scenarios for Discovering Requirements

Konstantinos Zachos and Neil Maiden, City University, London
Amit Tosar, M&G Investments

The ART-SCENE tool supports using rich-media scenarios in requirements discovery. These scenarios help stakeholders recognize events that a system will have to handle.


Walking through scenarios is an effective technique for discovering requirements,1 but scenarios can differ widely in their abstraction levels and representation forms. Some requirements analysts use scenarios that describe a system’s external visible behavior.2 Others use live sequence charts to model a system’s dynamic behavior.3 Requirements analysts also use scenarios for everything from explaining a current system4 to walking through a future system’s

behavior to discover its requirements.5 However, determining the right form of scenarios for different requirements tasks remains an open question—one we’re investigating with ART-SCENE.

ART-SCENE is an Internet-based environment for generating and walking through text scenarios. Case studies have shown its effectiveness for discovering requirements for software systems, such as those used in air-traffic management.6 However, we believe that adding rich-media representations of scenario events—visual, video, and audio—to text descriptions can make requirements discovery more complete. Text-scenario walk-throughs are effective because people are better at identifying errors of commission rather than omission; that is, it’s easier to recognize rather than recall events a system will need to handle.7 ART-SCENE scenarios provide recognition cues (in

the form of automatically generated text events) that stakeholders can use to discover new requirements. Our argument is simple. We believe that rich-media scenarios will give stakeholders more recognition cues than text scenarios will. Whereas text scenarios describe agents’ behavior, rich-media scenarios also describe the agents’ environment and other information that text descriptions would keep tacit. However, implementing these rich-media scenarios poses two challenges: developing new processes for using rich-media scenarios that don’t detract from walk-throughs, and designing technologies that support how people work with scenarios.

Previous uses of rich media in requirements engineering

The last 10 years have seen sporadic use of rich-media representations in requirements processes.



The Advanced Multimedia Organizer for Requirements Elicitation (AMORE) supported rich-media representations’ use in diverse requirements tasks, such as storing video recordings of requirements meetings.8 Analysts used AMORE to store requirements as close to their natural forms as possible to maximize traceability and understanding. However, AMORE didn’t provide principles for using different rich-media documents in requirements tasks, so we can’t generalize from its use.

CREWS-EVE also used rich-media representations of system scenarios to create traces to the future system’s requirements.9 It addressed applications for which text-based scenarios alone were insufficient, and it used system goals to guide the capture of rich-media descriptions of the current system. Although CREWS-EVE offered strong traceability capabilities, it didn’t guide the process of walking through scenarios to discover requirements.

More recent studies reveal that pairing rich-media scenario representations and text scenarios might improve requirements discovery.10 Researchers gave inexperienced analysts a normal course scenario description of an information system in PowerPoint. Six analysts received a text version, and six received a version with additional rich-media representations, such as images and videos of the normal course scenario events. The analysts with the rich-media scenario discovered significantly more alternative courses than the ones with the text scenario (82 versus 39). This evaluation suggests rich media’s potential benefits in requirements discovery—benefits we can obtain through ART-SCENE’s new capabilities.

Enhancing ART-SCENE with multimedia scenarios

Requirements analysts use ART-SCENE to generate and walk through text scenarios systematically to discover and document requirements with stakeholders. (ART-SCENE’s use-case specification and scenario-generation techniques are described elsewhere.11) ART-SCENE’s features include


■ guidelines for writing use-case specifications,
■ automatic generation of scenarios from use-case specifications, and
■ guided scenario walk-throughs with Internet-based tools.


Using a Web-based module, stakeholders receive a text scenario that has been automatically generated in ART-SCENE (see figure 1). The left-side menu gives different functions for viewing the scenario and requirements. The top-line buttons offer walk-through functions (such as the next or previous event) and functions to add, edit, or delete events, comments, and requirements. The left section describes the scenario’s normal course event sequence. For example, Event 1 describes the start of the action in which the pilot calls the ground air-traffic controller. The right section describes generated alternative courses for each normal course event, presented as “what-if” questions. For example, the top-listed alternative course is “what if [when the pilot calls the ground air-traffic controller] the frequency is congested?” ART-SCENE generates and presents alternative courses for normal course events.

The analyst helps stakeholders consider, for each normal and alternative course event,

■ whether it might occur,
■ whether it relates to the future system, and
■ whether the future system will handle the event.

If ART-SCENE doesn’t specify requirements to handle an event that stakeholders recognize as relevant, then analysts have discovered omissions and can write new requirements, increasing requirements completeness. We use ART-SCENE to document all requirements and comments that arise during the walk-through. We represent ART-SCENE scenarios with text because it’s difficult to represent many alternative course events in one UML sequence diagram. Sequence diagrams are also hard for stakeholders to understand, especially during distributed walk-throughs when an analyst isn’t available to explain the diagrams.

ART-SCENE’s text-scenario version has helped specify two air-traffic-management systems. Eurocontrol applied ART-SCENE to generate and walk through 10 scenarios to discover requirements for CORA-2, a conflict resolution system. The 10 half-day walk-throughs discovered 136 new requirements on top of CORA-2’s 250 existing requirements.6 The UK’s National Air Traffic Services used ART-SCENE to generate and walk through 12 scenarios to discover requirements for the DMAN airport departure manager system.

Figure 1. A snapshot of ART-SCENE shows one scenario for an air-traffic-management system. It describes the pushback clearance (the left side) and automatically generated alternative courses for the highlighted normal course event (the right side).

The 12 half-day walk-throughs, held in different countries using Web technologies, discovered 245 new requirements. Both projects’ walk-throughs improved communication with stakeholders and gave developers confidence in their requirements coverage. One downside, however, is that these walk-throughs removed the air-traffic controllers from the context of their work, reducing the number of cues that can inform requirements discovery. So, we set out to extend ART-SCENE with rich-media scenarios to provide these contextual cues again during scenario workshops. Specific conceptual and technical challenges were to

■ integrate rich-media scenario fragments to enhance rather than detract from requirements discovery,
■ implement this integration to enable Web-based distributed and asynchronous walk-throughs of rich-media scenarios, and
■ continue to support end-user control over scenario content and use through features such as end-user editing.

The rich-media ART-SCENE architecture

We extended ART-SCENE to support rich-media scenarios in two stages. First, we designed a conceptual architecture for ART-SCENE so that it could generate and represent rich-media scenarios. Second, we implemented this architecture as an extension of ART-SCENE’s current software architecture.

Conceptual architecture

ART-SCENE’s conceptual architecture design involved two tasks. Our first task was to extend ART-SCENE to include rich-media representations of scenario events. Figure 2 depicts the resulting conceptual metamodel for rich-media ART-SCENE scenarios using UML notation. We instantiated this metamodel to model instances of rich-media scenarios.


Figure 2. ART-SCENE’s metamodel for rich-media scenarios (a UML class diagram relating Scenario, Storyboard, Event, Alternative course, Action, Agent, Object, Information Type, and Media Type).

A scenario is a sequence of two or more events. An event is a point in time when an action either starts or ends. An action is a description of behavior that involves one or more agents and has the attributes Type and Verb. Action types are communication, cognitive, system, physical, and complex. An action can involve one or more agents. Each agent has the attributes Name and Type. Agent types are human, machine, and composite. An action can also manipulate one or more objects. Each action can communicate one or more information types that inform rich-media selection. Information types are causal, descriptive, physical action, procedure, and role. One media fragment can be of the types text, image, audio, or video. A media fragment can be associated with zero, one, or many normal course events, and one event can have zero, one, or many associated media fragments. Furthermore, each scenario can have a rich-media storyboard composed of all media fragments associated with all events in the scenario.
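ART-SCENE itself stores these concepts in its relational scenario database and implements them in ASP and Visual Basic (see the technical architecture below). Purely as an illustration, the following Python sketch captures the metamodel’s main classes and associations as the text and figure 2 describe them; the class and attribute names follow the metamodel, while the module layout, the helper method, and the omission of alternative courses are our own simplifications.

```python
from dataclasses import dataclass, field
from typing import List

# Controlled vocabularies taken from the metamodel description above.
ACTION_TYPES = {"communication", "cognitive", "system", "physical", "complex"}
AGENT_TYPES = {"human", "machine", "composite"}
INFORMATION_TYPES = {"causal", "descriptive", "physical action", "procedure", "role"}
MEDIA_TYPES = {"text", "image", "audio", "video"}

@dataclass
class Agent:
    name: str
    type: str                       # human, machine, or composite

@dataclass
class Action:
    type: str                       # communication, cognitive, system, physical, or complex
    verb: str
    agents: List[Agent]             # an action involves one or more agents
    objects: List[str] = field(default_factory=list)   # objects the action manipulates

@dataclass
class MediaFragment:
    media_type: str                 # text, image, audio, or video
    location: str                   # pointer to a file on the server's file system

@dataclass
class Event:
    description: str
    action: Action                  # the action that this event starts or ends
    information_types: List[str] = field(default_factory=list)  # inferred in stage 1
    media: List[MediaFragment] = field(default_factory=list)    # zero, one, or many fragments

@dataclass
class Scenario:
    name: str
    events: List[Event]             # a sequence of two or more events

    def storyboard(self) -> List[MediaFragment]:
        """The rich-media storyboard: all fragments associated with all the scenario's events."""
        return [m for e in self.events for m in e.media]
```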


Our second task investigated how to design rich-media scenarios to support more efficient requirements discovery. One risk is that rich-media scenarios detract from rather than enhance requirements discovery. Therefore, we treated the ART-SCENE extension as a multimedia design problem that the analysts had to solve. We gave the analysts a principled approach to scenario design by implementing authoring guidelines12 that we extended with the information that ART-SCENE’s scenario events communicated.

So, how did we develop rich-media scenarios? After the text scenario’s generation in ART-SCENE, the analysts added rich-media fragments to each normal course event in two steps:

1. ART-SCENE infers the information types that the event description communicates, using the type of action that the event starts and the types of agents involved in the action; and
2. analysts select the most appropriate media type to represent the event, given the information types the event describes.

ART-SCENE automatically implements the first stage, but the second needs analyst intervention. Consider the first stage in more detail.

In ART-SCENE, each action must be of the type cognitive, physical, communication, system, or complex. Each action can involve one or two agents, both of which must be of the type human, machine, or composite. In the rich-media extension, we adopt five information types from Peter Faraday and Alistair Sutcliffe12 (descriptive, physical action, role, procedure, and causal) to type the information that each scenario event describes and communicates to analysts and stakeholders.

Table 1 presents the information-type selection rules. The left-hand and middle columns list the possible combinations of action and agent types that can occur in one scenario event. The right-hand column lists the information types that the scenario events communicate according to the agent and action types. ART-SCENE automatically implements the rules that this table specifies to select the types of information a scenario event will represent. In the DMAN scenario in figure 1, the pilot calls the ground air-traffic controller, starting a communication action involving two human agents. By applying the information-type selection rules, we see that the scenario event communicates information about role and procedure—that is, information about human agents’ roles in the system and one procedure (the call to the ground air-traffic controller). These information types provide input to the media selection rules in the second stage.

Table 2 presents the media selection rules. The first column lists the information types that scenario events communicate, inferred in the first stage. The other columns describe the different media types that we can use to represent these types of scenario information. An X in a cell indicates that we should use a media type to represent the type of information indicated, while an (X) indicates that the media type might be used. These media selection rules are based on Faraday and Sutcliffe’s multimedia design rules.12 ART-SCENE uses the media selection rules to guide the analyst in allocating the appropriate media resources to each information type. Our example scenario event (the pilot calls the ground air-traffic controller) contains information about roles and procedures. By applying the media selection rules, the analyst should use animation (video) supported by sound and speech to communicate the pilot and controller roles, and video animation and text to describe the call action.

Table 1. ART-SCENE’s information-type selection rules

Action type     Agent type            Information type
Communication   Human-human           Role and procedure
Communication   Human-machine         Physical action and procedure
Communication   Human-composite       Role, physical action, and procedure
Communication   Machine-machine       Physical action and procedure
Communication   Machine-composite     Physical action and procedure
Communication   Composite-composite   Role, physical action, and procedure
System          Machine               Physical action and causal
Cognitive       Human                 Physical action and causal
Physical        Human                 Descriptive, physical action, procedure, and causal
Complex         N/A                   Physical action, procedure, and causal
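ART-SCENE encodes and applies these rules automatically in its database and server-side code, not in Python. As a minimal sketch of how the first stage can be read, the rules in table 1 amount to a lookup keyed on the action type and the pairing of agent types; the function name and data layout below are our own, not ART-SCENE’s.

```python
# Information-type selection rules from table 1, keyed by (action type, agent pairing).
INFO_TYPE_RULES = {
    ("communication", "human-human"):         ["role", "procedure"],
    ("communication", "human-machine"):       ["physical action", "procedure"],
    ("communication", "human-composite"):     ["role", "physical action", "procedure"],
    ("communication", "machine-machine"):     ["physical action", "procedure"],
    ("communication", "machine-composite"):   ["physical action", "procedure"],
    ("communication", "composite-composite"): ["role", "physical action", "procedure"],
    ("system", "machine"):                    ["physical action", "causal"],
    ("cognitive", "human"):                   ["physical action", "causal"],
    ("physical", "human"):                    ["descriptive", "physical action", "procedure", "causal"],
    ("complex", None):                        ["physical action", "procedure", "causal"],
}

# Normalize agent pairings to the order used in table 1 (human before machine before composite).
AGENT_ORDER = {"human": 0, "machine": 1, "composite": 2}

def infer_information_types(action_type, agent_types):
    """Stage 1: infer the information types a scenario event communicates,
    given the type of action the event starts and the agents involved."""
    if action_type == "complex":
        key = ("complex", None)
    else:
        key = (action_type, "-".join(sorted(agent_types, key=AGENT_ORDER.get)))
    return INFO_TYPE_RULES.get(key, [])

# The worked example: the pilot calls the ground air-traffic controller,
# a communication action involving two human agents.
print(infer_information_types("communication", ["human", "human"]))
# -> ['role', 'procedure']
```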

Table 2. Media selection rules developers use to design rich-media scenarios. Rows list the information types inferred in the first stage (descriptive, physical action, role, procedure, simple causal, and complex causal); columns list the candidate media types (image, animation, text, and sound and speech). An X in a cell indicates that we should use a media type to represent the type of information indicated; an (X) indicates that the media type might be used.

The analyst can then produce, edit, and integrate rich-media fragments into the scenario.
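To show how the second stage sits on top of the first, here is an equally small sketch of the media allocation step. The rule table below is deliberately partial: it reproduces only the two allocations spelled out in the worked example above (role and procedure), with the full mapping coming from table 2, and the function name is again ours rather than ART-SCENE’s.

```python
# Partial media selection rules (stage 2). Only the allocations illustrated in the
# worked DMAN example are shown; the complete mapping comes from table 2.
MEDIA_RULES = {
    "role":      ["animation", "sound and speech"],
    "procedure": ["animation", "text"],
}

def recommend_media(information_types):
    """Stage 2: suggest media types for the information an event communicates.
    The analyst decides which suggestions to follow and produces the fragments."""
    return {info: MEDIA_RULES.get(info, []) for info in information_types}

# For the example event (a communication action between two human agents),
# stage 1 inferred the information types "role" and "procedure".
print(recommend_media(["role", "procedure"]))
# -> {'role': ['animation', 'sound and speech'], 'procedure': ['animation', 'text']}
```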

Technical architecture

We developed the text scenarios module using Microsoft Visual InterDev, supporting dynamic ASP pages on top of the Microsoft Access database. This legacy architecture constrained ART-SCENE’s rich-media extension. The rich-media scenario version adopts a traditional three-layer architecture, where



■ the presentation layer defines the GUI,
■ the application layer defines system logic in the ASPs that generate the dynamic GUI and in Visual Basic components, and
■ the database layer stores most of the persistent data about scenarios in a relational database.




We considered three design alternatives for storing rich-media documents in ART-SCENE: extending the database to store or point to rich-media documents; implementing a content management system tailored to manage nontraditional, heterogeneous, and unstructured data; or using a multimedia database management system, such as MediaWay, on top of the ART-SCENE database, to enable retrieving and updating rich-media data. We chose the first option for two reasons. ART-SCENE is a legacy system, and shifting the scenario database to a CMS would have risked its established reliability. Additionally, we considered solutions for communicating continuous and noncontinuous media between databases to lack the maturity that we needed for ART-SCENE.

The database stores and controls access to text-based application information but not rich-media documents. We developed a new, separate application to handle rich-media scenarios, with its own presentation and application layer. Both applications operate on the same database layer, which we extended to store rich-media documents separately in the server’s file system. Relational databases provide two mechanisms for storing rich-media documents: storing the image as a BLOB (Binary Large Object) in a database field or storing a pointer to a file location on the server’s file system. For ART-SCENE, we chose the second option to keep the Access database small and avoid reducing its performance by storing many large BLOBs. However, one potential disadvantage of storing file pointers is that file locations can change. So, ART-SCENE locates files dynamically rather than storing file path names directly in the scenarios database.

To support rich-media selection, we also added a table called MediaSelection to store the media selection rules presented in table 2. The media allocation component implements the media selection guidance. It uses the information-type selection rules in table 1 to recommend the information types for scenario events. It then gives the analyst advice based on the media selection rules in table 2 to design each rich-media scenario.
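ART-SCENE implements this file handling in ASP and Visual Basic over the Access database, so the following Python fragment is only an illustration of the dynamic file-location idea described above: media files are found at request time by searching the server’s media directory rather than by storing absolute path names that break when files move. The directory layout and the scenario_event naming convention here are hypothetical.

```python
from pathlib import Path

MEDIA_ROOT = Path("/srv/artscene/media")    # hypothetical server-side media directory

def locate_media(scenario_name: str, event_number: int):
    """Resolve an event's rich-media files at request time rather than keeping
    absolute path names in the scenarios database. Files are assumed to follow a
    <scenario>_<event>.<extension> naming convention, e.g. GivePushbackClearance_9.jpg."""
    pattern = f"{scenario_name}_{event_number}.*"
    return sorted(MEDIA_ROOT.glob(pattern))

# Example: list every fragment linked to event 9 of the GivePushbackClearance scenario.
for media_file in locate_media("GivePushbackClearance", 9):
    print(media_file.name)
```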


Recent experiences using ART-SCENE with rich-media scenarios to discover requirements for a hospital system provided valuable feedback. The analyst generating the scenarios encountered problems when uploading large documents produced using the ART-SCENE media selection guidelines, and had to use third-party tools to reduce their size. The analyst also encountered some performance problems during scenario walk-throughs. In light of these experiences, we’ll add data compression facilities and, in the longer term, extend ART-SCENE with media data stores to create a federated management system once more sophisticated communication controls are available.

How it all works

We demonstrate ART-SCENE with one scenario that we originally generated to specify DMAN, a complex sociotechnical system for sequencing and managing aircraft departures from major airports. DMAN involves human actors, such as tower controllers and aircraft pilots, who interact with computer-based systems related to both airport and air movements. The scenario is GivePushbackClearance—that is, everything that happens to an aircraft or within the air-traffic control tower, from the request to push-back to the aircraft pushing back.13 We extended the scenario with rich media from a film company.

Figure 3 shows how ART-SCENE supports adding rich-media documents to a scenario. The analyst uses the edit-scenario-event function shown in figure 1 to learn about the information and media types available and then to add rich-media documents. The analyst can select from the available information types using the pull-down menu.

Figure 3. Selecting and uploading rich-media documents to ART-SCENE scenarios.

Figure 4 shows the rich-media scenario version used during scenario walk-throughs. The analyst can access rich-media scenarios using a new menu item in the left pane. When the requirements analyst clicks on the Enable Multimedia feature, a camera icon to the right of the event description indicates all normal course events with associated rich-media fragments. Clicking on this icon opens a pop-up window that shows the associated rich-media files over the main scenario page. Figure 4 shows the View Image Files window for the event “The ground air-traffic controller looks at the nearby aircraft and traffic.” All images in the window are thumbnails, which the user clicks to obtain an image’s real size.

Figure 4. Image extensions to the GivePushbackClearance scenario.

Figure 4 demonstrates how rich-media scenarios assist requirements discovery. Event 9 (“The ground air-traffic controller looks at the aircraft and nearby traffic”) describes a physical human action. ART-SCENE shows three images of the controller viewing aircraft and traffic—images that might help discover additional requirements about the DMAN system’s accessibility, usability, and integration into its physical environment. Analysts can also obtain an overview of all rich-media files linked to the scenario using a slide show feature that displays and rotates the files in the event sequence. This way, they can see all image files for the scenario GivePushbackClearance as a slide show.

Experiences with rich-media scenarios

We evaluated the effectiveness and usability of ART-SCENE’s rich-media scenarios for requirements discovery. We asked four experienced requirements analysts to discover requirements for DMAN using one scenario that we presented in two forms: the text scenario only, then the combined text and rich-media scenario. All four analysts discovered an additional 24 requirements using the rich-media scenario in addition to the 21 requirements they discovered earlier using the text version

(although the task duration was short and the number of requirements they discovered wasn’t statistically significant). One air-traffic controller who evaluated ART-SCENE reported that rich-media scenarios can potentially resolve requirements problems that arise from different working practices, such as in different airports. He said that scenarios that include rich-media representations of these working practices can potentially communicate differences and avoid problems.

Of course, our domain choice might have influenced these results. Air-traffic controllers solve spatial problems, and we might expect them to react positively to rich-media images of their work space. Although these images differ from the spatial representations that controllers work with, we’re investigating how developers used ART-SCENE’s rich-media scenarios to discover requirements for a hospital system with stakeholders who didn’t routinely solve spatial problems.

We’ve learned several lessons from our experiences with rich-media scenarios.




First, simple guidelines and implementations can deliver a lot. We built ART-SCENE on simple directories of rich-media files cross-referenced with a relational scenario-content database. To use this tool, developers follow simple rules to select and produce media types. The result is Web-enabled support for walking through rich-media scenarios that can improve requirements discovery.

Second, treating scenario development as a multimedia design problem lets analysts use guidelines that software engineers normally don’t have. It demonstrates how they can apply other disciplines’ techniques, such as those from human-computer interaction and cognitive psychology, for real and immediate benefit.

Finally, scenarios provide an agenda and structure for capturing rich-media data. More traditional observational processes and tools,


such as AMORE,8 provide little guidance about what to capture and use during requirements processes. As a result, data sometimes overwhelms analysts. Structured text scenarios and media selection guidelines provide an agenda for what to record and how to record it.

Analysts can use our selection guidelines and implementation solutions to design rich-media scenarios for many domains that include physical and observable phenomena, including business applications and safety-critical systems domains. In future work, we aim to improve our understanding of rich-media scenarios’ impact on requirements discovery with eye-tracking equipment to record which scenario elements trigger requirements. We will also refine rich-media scenarios in ART-SCENE into smaller fragments to make a scenario’s information more reliable.

References

1. K. Weidenhaupt et al., “Scenario Usage in Systems Development: A Report on Current Practice,” IEEE Software, vol. 15, no. 2, 1998, pp. 34–45.
2. I. Jacobson, G. Booch, and J. Rumbaugh, The Unified Software Development Process, Addison-Wesley, 2000.
3. D. Harel and R. Marelly, Come, Let’s Play: Scenario-Based Programming Using LSCs and the Play-Engine, Springer-Verlag, 2003.
4. J.M. Carroll, Making Use: Scenario-Based Design of Human-Computer Interactions, MIT Press, 2000.
5. C. Rolland, C. Souveyet, and C. Ben Achour, “Guiding Goal Modeling Using Scenarios,” IEEE Trans. Software Eng., vol. 24, no. 12, 1998, pp. 1055–1071.
6. A. Mavin and N.A.M. Maiden, “Determining Socio-Technical Systems Requirements: Experiences with Generating and Walking through Scenarios,” Proc. 11th Int’l Conf. Requirements Eng., IEEE CS Press, 2003, pp. 213–222.
7. A.D. Baddeley, Human Memory: Theory and Practice, Lawrence Erlbaum Associates, 1990.
8. D.P. Wood, M.G. Christel, and S.M. Stevens, “A Multimedia Approach to Requirements Capture and Modelling,” Proc. 1st Int’l Conf. Requirements Eng., IEEE CS Press, 1994, pp. 53–56.
9. P. Haumer et al., “Bridging the Gap between Past and Future in RE: A Scenario-Based Approach,” Proc. 4th IEEE Int’l Symp. Requirements Eng., IEEE CS Press, 1999, pp. 66–73.
10. K. Zachos and N.A.M. Maiden, “ART-SCENE: Enhancing Scenario Walkthroughs with Multi-Media Scenarios,” Proc. 12th IEEE Int’l Conf. Requirements Eng., IEEE CS Press, 2004, pp. 360–361.
11. N.A.M. Maiden, “Systematic Scenario Walkthroughs with ART-SCENE,” Scenarios, Stories, and Use Cases, I.F. Alexander and N.A.M. Maiden, eds., John Wiley & Sons, 2004, pp. 161–178.
12. P. Faraday and A.G. Sutcliffe, “Providing Advice for Multimedia Designers,” Proc. ACM SIGCHI Conf. Human Factors in Computing Systems (CHI 98), ACM Press, 1998, pp. 124–131.
13. N.A.M. Maiden et al., “Integrating Creativity Workshops into Structured Requirements Processes,” Proc. Conf. Designing Interactive Systems (DIS 2004), ACM Press, 2004, pp. 113–122.

About the Authors

Konstantinos Zachos is a PhD student in requirements engineering at City University, London. His research interests include developing effective service discovery mechanisms and techniques. He received his diploma in computer science from Rheinisch-Westfälische Technische Hochschule Aachen, Germany. Contact him at the Centre for HCI Design, City Univ., Northampton Square, London EC1V 0HB, UK; [email protected].

Neil Maiden is a professor of systems engineering at City University, London. His research interests include requirements engineering and scenario-driven approaches to software development. He received his PhD in computer science from City University. He’s a member of the British Computer Society. Contact him at the Centre for HCI Design, City Univ., Northampton Square, London EC1V 0HB, UK; [email protected].

Amit Tosar is a business analyst at M&G Investments, where he works on business system requirements for fixed-income and derivative-based products. He received his bachelor’s degree in business computing systems from City University, London. Contact him at M&G Investment Management Ltd., Laurence Pountney Hill, London EC4R 0HH, UK; amit.tosar@mandg.co.uk.

