Evaluation Criteria for Information Systems Development Methodologies


Keng Siau, Department of Management, University of Nebraska-Lincoln, Email: [email protected]

Xin Tan, Department of Management, University of Nebraska-Lincoln, Email: [email protected]

ABSTRACT

The evaluation of information systems development methodologies is becoming increasingly important. The lack of a commonly accepted set of evaluation criteria, however, hinders the aggregation of knowledge from field studies. Our study attempts to fill this gap. To reduce the effects of subjectivity, we surveyed the opinions of a group of 28 experienced IS researchers. The participants generated fifty-one criteria. A systematic content analysis technique is being applied to group the unique criteria into general categories. Upon completion, our study can make both academic and practical contributions to the evaluation of ISD methodologies.

Keywords

Information systems development methodologies, evaluation criteria, brainstorming, content analysis

INTRODUCTION

Many different information systems development (ISD) methodologies exist. Avison and Fitzgerald (1995) acknowledge the proliferation of ISD methodologies and refer to this as “the methodology jungle.” It is evident that practitioners and researchers have devoted considerable attention to the development of ISD methodologies. The evaluation of existing ISD methodologies, however, has not kept pace with their rapid growth. This dearth of research on ISD methodology evaluation has critical ramifications. By failing to evaluate the ISD methodologies they currently use, organizations may not accurately comprehend their usefulness and effectiveness. In addition, the lack of evaluation prevents practitioners and researchers from appreciating the strengths and weaknesses of various methodologies -- knowledge that is critical for designing more effective and useful methodologies. Evaluating ISD methodologies is therefore an imperative task for both practitioners and researchers, and warrants more research. Developing a set of generally acceptable criteria is one of the first steps toward a systematic process for evaluating ISD methodologies. The objective of our study is to identify such a set of criteria.

BACKGROUND AND MOTIVATION

To cope with the problems caused by the individualistic approach of early information systems development efforts, many organizations turned to logically appealing ISD methodologies. ISD methodologies offer an engineering-like development discipline, provide explicit deliverables, and safeguard consistency as information systems are built. In this section, we first clarify the terminology related to ISD methodologies. We then review the existing IS literature on evaluating ISD methodologies. Finally, we discuss the motivation of our study.

ISD Methodology

For the purpose of this paper, we offer a definition of an ISD methodology, adapted from Avison and Fitzgerald (1995; 2003) and Lyytinen (1987). An ISD methodology is a systematic approach to conducting at least one complete phase of information systems development, consisting of a recommended collection of phases, procedures, techniques, tools, and documentation aids.


Avison and Fitzgerald (2003) offer an excellent review of the history of ISD methodologies. They split the evolution of ISD methodologies into four eras, as shown in Table 1.

Pre-methodology (1960s to 1970s): Computer applications were developed without explicit or formalized methodologies.
Early-methodology (late 1970s to early 1980s): The dominant ISD approach during this era was the System Development Life Cycle (SDLC).
Methodology (late 1980s to early 1990s): Numerous new approaches emerged in response to one or more limitations of the SDLC approach. Tools supporting many methodologies were also developed.
Post-methodology (late 1990s to present): This era is characterized by a serious reappraisal of the usefulness of the earlier ISD methodologies.

Table 1. The evolution of ISD methodologies

Prior Work on the Evaluation of ISD Methodologies

The “danger of returning to the bad old days of the pre-methodology era and its lack of control, standards, and training” (Avison and Fitzgerald, 2003, p. 79) raised alarm among practitioners and researchers alike, who consequently called for more systematic evaluation of ISD methodologies. One of the earliest attempts to evaluate and compare ISD methodologies was the series of conferences known as CRIS (Comparative Review of Information Systems Design Methodologies) (Olle et al., 1982; Olle et al., 1983; Olle et al., 1986). The endeavor, however, failed to resolve the issues it set out to address, and was thus not very influential in practitioner circles.

Some field studies have been conducted to evaluate selected ISD methodologies in natural settings. For instance, Edwards, Thompson, and Smith (1989a; 1989b) conducted a series of field studies to assess the satisfaction of Structured Systems Analysis and Design Method (SSADM) users. Dekleva (1992) surveyed practitioners in 122 organizations to evaluate the benefits of modern ISD methodologies from the perspective of systems maintenance. More recently, Grant and Ngwenyama (2003) reported on an action research study that evaluated the usefulness of a manufacturing information systems development methodology at a manufacturing technology company. While the findings of these field studies shed some light on the usefulness and effectiveness of certain ISD methodologies, evaluations based on different criteria cannot be aggregated to reach solid conclusions.

Other studies are largely based on conceptual analysis and evaluation of ISD methodologies. For example, Nielsen (1989), Klein and Hirschheim (1991), and Avison and Fitzgerald (1995) take a similar approach, discussing the strengths and weaknesses of selected methodologies according to criteria that they deem important. Subjectivity is one of the main criticisms of such an approach.

Motivation of Our Study

The review of prior work presented above highlights a gap in the literature that must be filled in order to advance research in this area. First, empirical evaluations based on different sets of criteria cannot contribute significantly to aggregated knowledge. Second, even though some researchers have proposed their own sets of criteria for evaluating ISD methodologies, these criteria may not be applicable to currently used ISD methodologies, for two reasons. The first is that criteria proposed by individual researchers are prone to the problem of subjectivity (Avison and Fitzgerald, 1995; Siau and Rossi, 1998). The second is that criteria developed in the late 1980s and early 1990s may not reflect emerging trends in ISD evolution, such as business process reengineering (BPR), agile development, Web applications, and object-orientation. Our study attempts to fill this gap. By surveying a group of experienced researchers, we are able not only to reduce the problem of subjectivity, but also to identify relevant criteria that reflect current trends in information systems development.

RESEARCH METHOD

In this study, we surveyed a group of experienced researchers on their opinions about the criteria that should be used for evaluating currently used ISD methodologies. Data were collected through a Web-based electronic brainstorming session. A systematic content analysis technique guided our data analysis.

Participants

To recruit participants for this study, several messages inviting members to participate were posted to listservs related to systems analysis and design. Table 2 summarizes the participants' experience in development methodology research. The participants also came from various geographic regions: 13 from North America, 11 from Europe, 3 from Asia/Pacific, and 1 from South America. The problem of subjectivity is, to a great extent, reduced by this diverse yet experienced group of participants.

Experience in information technology research: 14.4 years (mean)
Experience in development methodology research: 10.6 years (mean)
Experience with structured methodologies: 9.6 years (mean)
Experience with object-oriented methodologies: 4.9 years (mean)

Table 2. Participants' experience in ISD methodologies research

Data Collection

Electronic brainstorming is an effective yet structured way to generate a large number of ideas about a problem domain (Dennis and Valacich, 1993). It can overcome the weaknesses and problems inherent in traditional (face-to-face) brainstorming, namely social loafing, evaluation apprehension, and production blocking (Gallupe, Bastianutti, and Cooper, 1991). In addition, synergy may arise from the pool of ideas exchanged by participants (Nagasundaram and Dennis, 1993). In our study, we applied a Web-based anonymous brainstorming session. Besides offering the advantages associated with electronic brainstorming, Web-based brainstorming enables participants with Web access to enter their inputs anywhere and at any time. A Web site was constructed for our study; it provided continuous information to participants and allowed them to add new criteria online. All inputs to the Web site were anonymous. Figure 1 shows a screenshot of the main Web page for the brainstorming session.

Figure 1. Screenshot of the main brainstorming Web page
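For illustration only: the paper does not describe how the brainstorming site was built. A minimal anonymous submission endpoint of the kind discussed above might be sketched as follows in Python (Flask); every route and name here is an assumption of this sketch, not the authors' actual system.

    # Hypothetical sketch of an anonymous Web-based brainstorming endpoint
    # of the kind described above. The paper does not disclose its
    # implementation; routes, names, and the use of Flask are assumptions.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    criteria = []  # shared, in-memory pool of submitted criteria

    @app.route("/criteria", methods=["GET"])
    def list_criteria():
        # Expose the shared pool so participants can build on each other's ideas.
        return jsonify(criteria)

    @app.route("/criteria", methods=["POST"])
    def add_criterion():
        # No user identifier is recorded, keeping submissions anonymous.
        entry = {
            "criterion": request.form.get("criterion", "").strip(),
            "description": request.form.get("description", "").strip(),
        }
        if entry["criterion"]:
            criteria.append(entry)
        return jsonify(entry), 201

    if __name__ == "__main__":
        app.run()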


Data Analysis

Our data analysis follows a systematic content analysis method outlined by Miles and Huberman (1994). In the first round content analysis, each author individually reviewed the whole list of criteria and identified irrelevant or overlapping entries. The second round content analysis is to group the individual criteria into more general categories. At the time of completing this paper, the authors are reviewing evaluation frameworks in the literature in order to reduce the effects of subjectivity in this categorization process. PRELIMINARY FINDINGS
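The first-round winnowing described above was done manually by the authors. Purely as a hedged illustration, a script like the following could flag near-duplicate entries for human review; the example entries and the similarity threshold are invented for this sketch.

    # Hypothetical sketch: flag near-duplicate brainstormed criteria for
    # manual review. The actual first-round analysis was done by hand;
    # the entries and the 0.8 threshold below are invented examples.
    from difflib import SequenceMatcher
    from itertools import combinations

    raw_criteria = [
        "CASE tool support",
        "Case Tool Support",                    # same idea, different casing
        "Support for Project Management",
        "Support for Project Management Team",  # similar wording, distinct idea
    ]

    def similarity(a: str, b: str) -> float:
        # Ratio of matching characters, ignoring case.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Pairs above the threshold are candidates for merging; the final
    # decision stays with the human coders.
    for a, b in combinations(raw_criteria, 2):
        if similarity(a, b) > 0.8:
            print(f"Review for overlap: {a!r} <-> {b!r}")

PRELIMINARY FINDINGS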

The participants brainstormed 51 criteria over a one-month period. After the first round of content analysis, the authors identified 32 unique criteria for evaluating ISD methodologies (see Appendix A). In the process of grouping these diverse criteria into general categories, three categories emerged -- methodology design, methodology use, and methodology deliverables. These three categories represent the means and goals of ISD methodologies: methodology design properties are means to achieve the goals of effective and efficient methodology use, which in turn leads to the ultimate goal of satisfactory methodology deliverables. The three categories are similar to the three-stage model put forth by Jayaratna (1994), who recommends that ISD methodologies be evaluated in three stages: before a methodology is adopted, during its use, and after the intervention (use). Following his recommendation, we suggest that criteria related to methodology design be used in the first stage for conceptual evaluation; criteria related to methodology use are suitable for the second-stage evaluation during the intervention; and criteria related to methodology deliverables are appropriate for assessing the success or otherwise of a methodology after the intervention.
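To make the mapping concrete, the following Python sketch encodes the three emergent categories and the Jayaratna (1994) stage at which each applies. The identifiers are this sketch's assumptions, not an artifact of the study.

    # Illustrative only: the paper's three emergent criterion categories
    # aligned with Jayaratna's (1994) three evaluation stages.
    from enum import Enum

    class EvaluationStage(Enum):
        BEFORE_ADOPTION = "conceptual evaluation before the methodology is adopted"
        DURING_USE = "evaluation during the intervention (use)"
        AFTER_USE = "assessment of success after the intervention"

    # Each criterion category maps to the stage at which it applies.
    CATEGORY_TO_STAGE = {
        "methodology design": EvaluationStage.BEFORE_ADOPTION,
        "methodology use": EvaluationStage.DURING_USE,
        "methodology deliverables": EvaluationStage.AFTER_USE,
    }

    for category, stage in CATEGORY_TO_STAGE.items():
        print(f"{category} criteria -> {stage.value}")

CONCLUSION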

Our study attempts to identify a set of generally acceptable criteria for evaluating ISD methodologies. We concur with Avison and Fitzgerald (1995) that all evaluations are subjective in nature and that the choice of a methodology evaluation framework is a value-laden task. However, research methods can be applied to greatly reduce the problem of subjectivity. In our study, we adopted a Web-based anonymous brainstorming method to survey the opinions of a group of experienced IS researchers. In the data analysis, the two authors followed a systematic content analysis technique. While it is impossible to remove subjectivity entirely, our approach greatly reduces its effects on the findings.

Upon completion, our study will make significant theoretical and practical contributions. For researchers, it represents a first step toward a framework for evaluating ISD methodologies; a systematic evaluation framework may be built upon our findings. For practitioners, our findings provide a set of evaluation criteria that are not severely affected by subjectivity. In evaluating ISD methodologies, practitioners will thus have another option besides the criteria proposed by methodology vendors, which are often purported to be objective.

REFERENCES

1. Avison, D., and Fitzgerald, G. (1995) Information Systems Development: Methodologies, Techniques and Tools, (Second ed.) McGraw-Hill International, Glasgow, Great Britain.
2. Avison, D., and Fitzgerald, G. (2003) Where now for development methodologies? Communications of the ACM, 46, 1, 79-82.
3. Dekleva, S.M. (1992) The influence of the information systems development approach on maintenance, MIS Quarterly, 16, 3, 355-372.
4. Dennis, A.R., and Valacich, J.S. (1993) Computer brainstorms: More heads are better than one, Journal of Applied Psychology, 78, 4, 531-537.
5. Edwards, H.M., Thompson, J.B., and Smith, P. (1989a) Experiences in use of SSADM: Series of case studies, Information and Software Technology, 31, 8, 411-428.
6. Edwards, H.M., Thompson, J.B., and Smith, P. (1989b) Results of survey of use of SSADM in commercial and government sectors in United Kingdom, Information and Software Technology, 31, 1, 21-28.
7. Gallupe, R.B., Bastianutti, L.M., and Cooper, W.H. (1991) Unblocking brainstorms, Journal of Applied Psychology, 76, 1, 137-142.
8. Grant, D., and Ngwenyama, O. (2003) A report on the use of action research to evaluate a manufacturing information systems development methodology in a company, Information Systems Journal, 13, 1, 21-35.
9. Jayaratna, N. (1994) Understanding and Evaluating Methodologies: NIMSAD, a Systematic Framework, McGraw-Hill, Maidenhead, UK.
10. Klein, H.K., and Hirschheim, R. (1991) Rationality concepts in information systems development methodologies, Accounting, Management & Information Technology, 1, 2, 157-187.
11. Lyytinen, K. (1987) Different perspectives on information systems: problems and solutions, ACM Computing Surveys, 19, 1, 5-46.
12. Miles, M., and Huberman, A.M. (1994) Qualitative Data Analysis: An Expanded Sourcebook, Sage, Thousand Oaks, CA.
13. Nagasundaram, M., and Dennis, A.R. (1993) When a group is not a group: The cognitive foundation of group idea generation, Small Group Research, 24, 4, 463-489.
14. Nielsen, P.A. (1989) Reflections on development methods for information systems, Office: Technology & People, 5, 2, 81-104.
15. Olle, T.W., Sol, H.G., and Tully, C.J. (1983) Information Systems Design Methodologies: A Feature Analysis, North-Holland, Amsterdam.
16. Olle, T.W., Sol, H.G., and Verrijn-Stuart, A.A. (1982) Information Systems Design Methodologies: A Comparative Review, North-Holland, Amsterdam.
17. Olle, T.W., Sol, H.G., and Verrijn-Stuart, A.A. (1986) Information Systems Design Methodologies: Improving the Practice, North-Holland, Amsterdam.
18. Siau, K., and Rossi, M. (1998) Evaluating information modeling methods - A review, Proceedings of the Thirty-First Hawaii International Conference on System Sciences (HICSS-31), January 6-9, Big Island, HI, USA, IEEE Computer Society, 314-322.

APPENDIX A: UNIQUE CRITERIA FOR EVALUATING ISD METHODOLOGIES

Total coverage: A methodology should cover the entire systems development process, from strategy to cutover and maintenance.
Consistency in means and fundamentals: The philosophical fundamentals of a methodology should be reflected by its means, i.e., procedures, techniques, and tools.
Having conflict resolution strategies between users: Almost all organizational information systems are opposed by at least a subset of users. A methodology with a formal mechanism to resolve such conflicts would be useful.
CASE tool support: It is preferable to have CASE tool support for the methodology.
Take into account human cognition: Human cognition and its limitations should be recognized in the methodology.
Take into account the social aspect: Team and group issues should be recognized and addressed.
Take into account the organizational aspect: The organization's dynamics and issues should be captured and reflected using the methodology.
Validation mechanisms: Include ways to validate the correctness of the model with the domain expert.
Semantic stability: Models, and systems built around them, should be minimally impacted by changes to the business domain that do not alter the meaning of existing semantic structures.
Formal foundation: Models should be unambiguous and formally grounded. Wherever possible, the models should be executable.
Support for project management: A methodology should support management of the IS/IT project: identifying milestones, generating reports and documentation, etc.
Support for the project management team: Should support a team of project managers and analysts in using the methodology -- communication support, conflict analysis, schedule conflict analysis, etc.
Communication: Facilitate and support the communication process among the various stakeholders.
Quality measurement: The methodology should provide a list of quality measurement criteria for the input, procedure (process), and output of software development.
Modeling oriented: Represent and communicate knowledge, which in many (not all) cases is best done using modeling techniques.
Support group work: How well the methodology leverages group wisdom is key.
Support for creativity and innovation: The methodology should provide and support techniques (e.g., brainstorming) for business process innovation and reengineering.
Flexibility/adaptability: Systems development methodologies must be able to adjust to changing technologies and management needs.
Usability: The methodology should help the developer/designer specify and develop what he/she wants without having to jump through hoops.
Agility: The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Customizability: It should be possible to customize the methodology based on the size, type, etc. of the project.
Not vendor controlled: The methodology should not be controlled by a single vendor.
Reasonably priced: It should not be too expensive.
Web enabled: Should be usable over the Internet or even wireless networks.
Reusability: Parts of the method should be reusable in other methods or projects.
Continuous evolution and enhancement: The methodology should be enhanced and extended all the time -- getting better and better.
Easily mapped to development environments: While the methodology should be independent of particular programming languages or environments, it should be easy to relate its output to popular development environments and to facilitate communication with developers.
High quality working system: The methodology should lead to the production of high quality working systems.
Produce understandable documents: It should be possible for the user (not the developer) to easily understand the documentation produced. The documentation should help the user pinpoint potential problems.
Knowledge base: Provide a knowledge base of best practices.
Organizational memory: Help in capturing and sharing knowledge related to systems development and project management in the organization.
Accessibility of documentation: The methodology must be well documented and the documentation must be easily accessible.
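As a practical illustration, the criteria above could serve as a weighted scoring rubric when comparing candidate methodologies. The following Python sketch is hypothetical; all weights, ratings, and methodology names are invented placeholders, not data or recommendations from this study.

    # Hypothetical sketch: using Appendix A criteria as a weighted scoring
    # rubric for comparing candidate methodologies. All weights and ratings
    # are invented placeholders.
    weights = {
        "Total coverage": 3,
        "CASE tool support": 2,
        "Usability": 3,
        "Reasonably priced": 1,
    }

    # Ratings on a 1-5 scale for two made-up methodologies.
    ratings = {
        "Methodology A": {"Total coverage": 4, "CASE tool support": 5,
                          "Usability": 3, "Reasonably priced": 2},
        "Methodology B": {"Total coverage": 3, "CASE tool support": 2,
                          "Usability": 5, "Reasonably priced": 4},
    }

    for name, scores in ratings.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{name}: weighted score {total}")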
