Assessment Methodology for a Maturity Model for Interorganizational Systems – The Search for an Assessment Procedure

Norbert Frick, University of Koblenz-Landau
Tim Florian Küttner, University of Koblenz-Landau
Petra Schubert, University of Koblenz-Landau

Abstract

The benefits of interorganizational systems (IOS) can often not be realized due to unknown or undiscovered impediments, technical deficiencies or lacking organizational capabilities. Maturity models are prominent instruments for the assessment of organizations and their capabilities for interoperability. Nevertheless, they are often criticized with respect to their purpose of use, model evaluation, general structure and assessment methodology. This research paper analyses 43 existing maturity models in the domain of IOS with a special focus on the assessment methodology. The goal is to find existing assessment methodologies that can be adapted to a new maturity model for IOS. The findings indicate that there is no common or established assessment methodology, but several assessment types. We propose an assessment method matrix for the classification and allocation of assessment methods and types according to Model complexity and Subject complexity in order to determine an assessment methodology.

1. Introduction

According to a recent survey of 1,500 CIOs in 30 industries and 48 countries, enterprise applications are the second most significant IT priority [33]. Still, the benefit contribution of IT investments to the organization proves to be uncertain [29]. The juxtaposition of both statements reveals a common conflict in everyday business management which is especially reflected in interorganizational IT implementation projects [47]: well-known and identified beneficial effects of interorganizational activities (e.g. [24]; [20]; [12]) cannot be realized due to impediments hindering organizations from engaging in a business-to-business (B2B) relationship (e.g. [26]; [30]). The struggle with the implementation of an interorganizational system that is optimized for an organization's needs originates, for example, from inherent conflicts between the integration partners [36], a lack of management support and technical infeasibility [21] or information transparency problems [47].

Academia and practitioners alike have tried to address this problem by developing different kinds of theoretical assumptions and models for the classification and assessment of interorganizational systems [42]; [51]. Prominent examples of these attempts are so-called maturity models. They typically "[…] consist of a sequence of maturity levels for a class of objects" [4] and outline an evolutionary path from a bottom stage of maturity to the highest level of maturity. Despite their differing purposes, for example as a tool for continuous improvement [1] or as a means for assessment and benchmarking [16], they classify and assess institutional, organizational and/or technical capabilities of an organization or information system that provide certain beneficial effects according to the corresponding maturity level. By definition, these model types should be sufficient to support organizations in the assessment of their maturity level and their capabilities to conduct interorganizational integration by a) identifying the beneficial effects corresponding to each maturity level and b) enacting the necessary measures to overcome existing impediments preventing interorganizational activities. However, the literature criticizes that many model approaches lack a sound underlying theoretical basis and a holistic view of all relevant maturity issues of a domain [35]; [5]. This ultimately leads to a collection of IOS-related maturity models that differ in many aspects, e.g. scope, granularity or applicability. Our overall research attempts to overcome these deficiencies by developing a maturity model for interorganizational integration that is based on a rigorous development strategy (cf. chapter 3) and a solid empirical database. The results presented in this paper reflect the overall design process in our research project but focus on a very important part within it, namely the assessment methodology of the maturity model.

Our preliminary analysis of a small set of existing maturity models within the IOS domain showed different assessment strategies depending on the model design and the model use [17]. Further research suggests that there is no generally accepted assessment methodology for a certain type of maturity model [35]. Nevertheless, the assessment methodology itself is a critical component of the model use, as it is responsible for the collection of the data on whose basis the maturity level is determined. This contradiction between the call for an intersubjectively verifiable assessment methodology [41] and a supposedly inconsistent use of assessment methodologies in practice motivated us to investigate 43 existing maturity models within the research domain of interorganizational systems in order to answer the following research question: What assessment methodologies are used within IOS maturity models in relation to their model design?

The remainder of this paper is structured as follows: Chapter 2 introduces the state of current research. Chapter 3 relates our research steps for this particular design phase to the overall research design of our longitudinal research project. The analysis in chapter 4 discusses the results of the model comparison. Chapter 5 draws conclusions, states limitations and outlines further research.

2. Critical appraisal of maturity models in IS

The term maturity has been discussed in the field of information systems (IS) for a long time. Nolan's stage model for enterprise data processing (EDP) was one of the first attempts to theorize the evolution from an initial stage to a more mature stage of EDP [38]. Several other model approaches followed for different domains of IS, such as quality management [14], organizational maturity [6], the use of ERP systems [25] or service operations [32], to name a few. The Capability Maturity Model (CMM) is probably the most prominent example of a maturity model description and assessment [40]. It was developed by the Carnegie Mellon Software Engineering Institute (SEI) and comprises five stages that organizations go through as they move from an immature to a mature understanding of business processes. Its key assumption is that more mature organizations perform more consistently and are thus more successful. The levels are: Initial, Repeatable, Defined, Managed and Optimizing. Quantitative feedback from the process and from piloting innovative ideas and technologies enables continuous process improvement.

Probably stimulated by its success, many maturity model approaches for different domains were developed subsequently (cf. [16]; [37] for a broader overview). With more than 150 maturity models [10] in the domain of IS, this hybrid of a model description (in terms of defining maturity levels and their evolutionary structure) and a method application (in terms of assessment methodology and improvement measures) [34] has become quite popular with academia and practitioners alike. In recent years, the perceived ease of use of these models has been complemented by a more critical appraisal of the theoretical foundation of maturity models and their applicability (e.g. [10]; [34]; [41]). Typical, but not exhaustive, areas of critique are: purpose of use, general structure, model evaluation and assessment methodology.

2.1. Purpose of use

The purpose of a maturity model is the structured and formalized guidance through an evolutionary progress by evaluative and comparative measures [10]. To include a broad range of organizations within the potential domain-specific user group of a maturity model, model designers typically provide an abstract description of the maturity levels and their respective assessment criteria. Consequently, many maturity models open themselves to the critique of simplicity. Nolan's stage model was one of the first models to be subject to this kind of critical scientific discussion [7]; [27], which pointed to the purely sequential step-by-step view lacking any aspect of evolutionary change. Ironically, more than 25 years of maturity model development have not led to a less critical appraisal of existing maturity models in retrospect [41]. Another aspect of the purpose of use is emphasized by de Bruin et al. [10]: to meet the intended audience's needs, model designers have to focus on why the audience wants to apply the model, how it can be applied (given different organizational structures), who will apply the model and what the possible outcome of the model application is. This consideration is important for the later design process, as the design result (the model itself) will always have to strike a balance between necessary model granularity on the one hand and model applicability and understandability on the other. A model that is too simple may leave out information or aspects necessary for the overall maturity assessment; a model that is too complex may limit interest or even produce misleading assessment results.

2.2. General structure

The successful impact of the CMM tempted authors to adapt its structural build-up to their own maturity models [5]. Consequently, many maturity models in the domain of IS were classified as CMM-like [16]. However, without a proper derivation of the decision for a model structure during the design process, the choice of a CMM-like build-up seems arbitrary [41]. Fraser et al. [16] identified further model types: Maturity grids (an array-like structure with level- and aspect-related statements), Likert-like models (assessment of maturity aspects according to questionnaires) and Hybrids (a combination of the Maturity grid and the Likert-like model structure). Yet, overall classification attempts for maturity models are rare. Many researchers use implicit structural definitions like Solli-Sæther and Gottschalk [46]: maturity models typically consist of several stages that (1) are sequential within their evolutionary progress, (2) represent a hierarchical structure that cannot be reversed easily and (3) encompass a broad collection of organizational activities and structures. Although these kinds of definitions include a typical model setup and its generic characteristics, they fail to suffice as a classification scheme covering all types of maturity models and possible domain-related specifications. One of the first classification attempts for the IS domain was conducted by Mettler et al. [35], based on an analysis of 117 maturity models in the IS literature. They identified the following three dimensions that maturity models should include in order to cover all relevant aspects and fulfill their initial purpose: the first dimension, (1) General model attributes, serves mainly as a descriptive part for the model's assessment; the second dimension, (2) Maturity model design, deals with conceptual issues like the construction and organization of the model; (3) Maturity model use, as the third dimension, covers the deployment, the suggested assessment and practicality. Each dimension is given a distinct set of attributes that stand for a specific requirement or property of the maturity model. Special emphasis is laid on the Maturity model use, as Mettler et al. [35] propose three complementary requirements: Practicality of evidence (how can suggestions for improvement be made), Support of application (how can users be supported during the model assessment) and Method of application (how can the model assessment take place).

2.3. Model evaluation

Maturity models are typical design artifacts [34] that have to undergo some kind of evaluation to show their utility and applicability according to Hevner et al.'s [23] design guidelines. Subject to evaluation can be: the process of model design, and the quality and/or the components of the maturity model design product [41]. Looking at many published maturity models in IS, the process of model design is often unclear and its distinct design steps are not mentioned by the authors [4]. That makes it almost impossible to follow the research steps in terms of repeatability, verifiability or completeness. Furthermore, the contribution to the body of knowledge with regard to the design process itself is questionable. Authors like de Bruin et al. [10] and Becker et al. [4] set out to overcome this problem by providing future authors of maturity models with generic procedure models for their model design. De Bruin et al.'s [10] model focuses more on the overall model use, stating the phases Scope, Design, Populate, Test, Deploy and Maintain. Becker et al. [4] follow Hevner et al.'s [23] design guidelines to formulate specific design requirements reflected in the phases Problem definition, Comparison of existing maturity models, Determination of development strategy, Iterative maturity model development, Conception of transfer and evaluation, Implementation of transfer media, Evaluation and Rejection of maturity model. The quality of the design product is even more subject to criticism than the design process. In most cases, quality criteria like verifiability, validity, reliability, generalizability or cost efficiency [13]; [10]; [45] are dealt with in a non-conclusive manner. That does not mean that there is no empirical evaluation of the proposed models, but (e.g. depending on the chosen benchmark variables) there can be differing conclusions about a proven validity (e.g. [28]) or a failed attempt to prove it (e.g. [15]). This problem becomes more apparent the more abstract the overall model design and quality criteria are. The components of a maturity model are least indicative for a proper evaluation, as current research still struggles for a common composition of a "good" maturity model. Suggestions range from a general divide into a domain reference model and an assessment model [39] to a more detailed component description consisting of maturity levels, descriptors, level descriptions, dimensions, process areas, activities for process areas and activity descriptions [16].



2.4. Assessment methodology

Several authors have observed missing or insufficient assessment methodologies in existing maturity models (e.g. [16]; [35]; [41]). The main concern is not the misled use of the assessment methods themselves but the orchestration of these methods, so that an assessment may yield verifiable and valid results. Methods for an assessment can be briefly categorized into human-machine and human-human interaction. A simple self-information process based on the available documentation of maturity-related information within the organization can help assessors to gain a preliminary assessment. This can be enriched by questionnaires that are answered e.g. via an online survey tool [10]. These results can be complemented by face-to-face inquiries (e.g. interviews) or by face-to-face (moderated) workshops or focus groups. Another aspect leading to a methodology utilizing the aforementioned methods is the distinction between three types of assessment [16]; [35]: Self-assessment (gathering information about one's own organization through internal members), Third-party assessment (an extension of the self-assessment that includes external assessment specialists supporting the internal staff) and Certified assessment (outsourcing of the assessment to certified practitioners). In particular, Self-assessment is subject to critique, as it is biased by definition due to the fact that the assessors stem from the internal staff of the assessed organization. Nevertheless, none of these types gives any hint of an assessment procedure that deploys specific methods for a pre-defined purpose to yield verifiable or valid results [41].
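To make the Likert-like method tangible, the following Python sketch shows one possible way of condensing questionnaire answers into a maturity level. It is our own minimal illustration, not taken from any of the models discussed here; the dimension names, the 1-5 scale and the weakest-dimension aggregation rule are assumptions.

```python
# Minimal sketch of a Likert-like assessment in the sense of Fraser et al.
# [16]: questionnaire answers are condensed into a maturity level. This is
# our own illustration, not taken from any of the analyzed models; the
# dimensions, the 1-5 scale and the aggregation rule are assumptions.
from statistics import mean

# Hypothetical answers per assessed dimension on a 1-5 Likert scale.
responses = {
    "technical":      [3, 4, 2, 3],
    "organizational": [2, 2, 3, 2],
    "institutional":  [1, 2, 1, 1],
}

def maturity_level(scores: dict[str, list[int]]) -> int:
    """Overall level as the floor of the weakest dimension's mean score.

    Taking the weakest dimension mirrors the staged idea that a level is
    only reached once all of its capability areas are satisfied.
    """
    return int(min(mean(answers) for answers in scores.values()))

print(maturity_level(responses))  # -> 1, held back by the institutional dimension
```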

3. Adoption of critical appraisal to maturity models for IOS

As stated earlier, our overall research attempts to design a maturity model for interorganizational systems that is based on a rigorous development strategy and on a solid empirical database for later evaluation. We adopted Becker et al.'s [4] and de Bruin et al.'s [10] procedure models for maturity model development to follow a design-oriented research approach. Our phases are in line with Becker et al.'s [4] proposed development phases: Problem definition, Comparison of existing maturity models, Determination of development strategy, Iterative maturity model development, Conception of transfer and evaluation, Implementation of transfer media, Evaluation and Rejection of maturity model. With respect to the intended long-term use of our model, we extended the procedure model with one further phase from de Bruin et al.'s [10] procedure model: Maintain.

Reflecting on de Bruin et al.'s [10] four questions that have to be considered during the design of such a model (cf. chapter 2.1), it is especially the what-question referring to the outcome of the model (Purpose of use, cf. chapter 2.1) that is in question at this point in our design process. The assessment methodology and the corresponding assessment methods are critical for any model evaluation (Model evaluation, cf. chapter 2.3). Therefore, we conducted an in-depth analysis of 43 existing maturity models in the domain of IOS to examine the applied assessment methodologies/assessment methods and their applicability for our model design. This research step was performed within our overall design process adapted from Becker et al. [4] and de Bruin et al. [10], namely: Comparison of existing maturity models. Within this design phase it is necessary to reflect on already existing work in the domain of IOS in order to incorporate sound and valid research results in the overall design, thereby laying the foundation for the consecutive design phase: Determination of development strategy.

To conduct a comprehensive analysis of existing assessment methodologies in IOS maturity research, we decided to perform a preliminary literature search. Existing guides for a thorough and rigorous literature search are rare and differ in size and scope [50]. Therefore, we based our literature search on a recently developed framework for literature search and analysis by vom Brocke et al. [9]. They distinguish five different phases: (1) definition of the review scope, (2) conceptualization of the topic, (3) literature search, (4) literature analysis and synthesis and (5) research agenda. We chose journal databases as the preliminary scope of our search (ACM Digital Library, AIS Electronic Library, EBSCOhost, Emerald, IEEE Xplore, INFORMS, ScienceDirect and SpringerLink), limiting the time period to 1993-2010 (ad 1). Our database search was initiated with keywords chosen from the domain of interest, namely: maturity, maturity model and interoperability (ad 2). The search itself proved to be somewhat difficult, as the term maturity was used in many publications but seldom hinted towards a complete maturity model. Additionally, the database search was extended to cover maturity models published in conference proceedings and for practitioners' use. Overall, we found 43 maturity models that described and assessed different topics within the domain of IOS (supply chain management, enterprise interoperability in general, data exchange, service-oriented architectures and Web services) (ad 3). For purposes of the analysis, we decided to use the existing classification framework for maturity models in IS by Mettler et al. [35] as an analytical lens, with a special emphasis on the model use (General structure) (ad 4). This allowed us to create comparability between all analyzed models as far as a classification was possible given the published content in the papers. For a more detailed description we refer to the following chapter 4. Finally, the proposed research agenda (cf. chapter 5) outlines several important issues concerning assessment methodologies/assessment methods for future research (Assessment methodology) (ad 5).

4. Analysis

Overall, 43¹ maturity models could be identified during our literature research and were analyzed according to Mettler et al.'s [35] classification framework for maturity models in IS. During our analysis we found that a mere comparison of all available models could cause a distorted result, as the foci differed significantly (e.g. supply chain management vs. Web services). Accordingly, we decided to perform a pre-classification of the model base. From the literature we derived three main perspectives on interorganizational integration that seemed reasonable: (1) technical, (2) organizational and (3) institutional (e.g. [30]; [2]; [11]) (cf. Table 1 for an excerpt of maturity models assigned to the three perspectives). Technical integration describes how information is processed and shared electronically within and across organizations. Electronically supported communication eliminates manual workload, prevents false data capture and data redundancies and is thus deployed to save costs and transaction time. Organizational integration refers to the organizational structures and processes which are put in place to improve the efficiency and effectiveness of the supply chain. Business processes are a key element of organizational design and are considered the core of IT-based value creation. Institutional integration describes the formal and informal agreements which govern interorganizational relationships, thereby reflecting the concepts that have been developed by transaction cost theory and the resource-based view. This area describes the governance structures among the players. The following analysis according to Mettler et al.'s [35] classification scheme comprises only an excerpt of the total number of maturity models due to page limitations.

¹ Please note: Due to page limitations we were not able to include all references for the 43 maturity models. The complete reference information is available upon request via e-mail.

4.1. General model attributes

We examined a total of 43 maturity models, 29 with an academic origin and 14 from business initiatives. The period of publication ranges from 1993 to 2009. The literature review confirms the findings of Mettler et al. [35] and de Bruin et al. [10] describing the lack of documentation and classification of existing maturity models. Some publications only provide the model itself (e.g. Types of Supply Chain Evolution), others focus on a mere illustration of the model (e.g. Supply Chain Redesign Capability Maturity Model) or give a brief description (e.g. Levels of Information Sharing between Organizations). Fourteen models could be assigned to the technical integration perspective, 27 to the organizational and 2 to the institutional integration perspective. The overall distribution shows a significantly lower number in institutional maturity research, whereas technical and organizational aspects are both well represented. The relevance of these models shows a more interesting aspect, as it does not mainly reflect on the models' common target audience (business users) but on the fairly high number of business-related origins (18) in comparison to models developed in academia (25). From an epistemological point of view, the analyzed models represent a broad bandwidth of different theoretical and even philosophical stances, ranging from a subjectivist [48] to a positivist worldview [18]. Still, our investigation did not reveal a direct link between the epistemological model background and the use of an assessment procedure.
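As an illustration of how such a classification can be operationalized, the following Python sketch (our own, not the authors' tooling) encodes the models as records along Mettler et al.'s [35] three dimensions and tallies attributes the way this chapter reports them. The six records correspond to the Table 1 excerpt below; the field selection is a simplification.

```python
# Illustrative sketch (our own, not the authors' tooling): each analyzed
# model is encoded as a record along Mettler et al.'s [35] three dimensions
# and attributes are tallied as reported in this chapter. The six records
# correspond to the Table 1 excerpt; the field selection is a simplification.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str         # (1) general model attributes
    origin: str
    year: int
    composition: str  # (2) maturity model design: "CMM" or "M" (Maturity grid)
    method: str       # (3) maturity model use: e.g. "Self-As." or "N/A"

records = [
    ModelRecord("Levels of Information Systems Interoperability", "Business", 1993, "CMM", "N/A"),
    ModelRecord("Levels of Conceptual Interoperability Model", "Academia", 2004, "M", "Self-As."),
    ModelRecord("SME Maturity Model", "Academia", 2008, "CMM", "N/A"),
    ModelRecord("IT-enabled Collaborative Networked Organizations MM", "Academia", 2009, "CMM", "Self-As."),
    ModelRecord("Enterprise Contract Management Maturity Model", "Business", 2008, "M", "Self-As."),
    ModelRecord("Contract Management Maturity Model", "Business", 2005, "CMM", "Self-As."),
]

print(Counter(r.origin for r in records))       # Counter({'Business': 3, 'Academia': 3})
print(Counter(r.composition for r in records))  # Counter({'CMM': 4, 'M': 2})
```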

4.2. Model design

The majority of the maturity models follow the Maturity grid model type (27), while 16 models use the CMM-like build-up. The Hybrid model type could not be identified in any analyzed model design. Based on this analysis of more than forty models, however, no apparent pattern can be discerned in terms of model design adoption. It remains unclear why and in what context the developers of the models chose e.g. the Maturity grid structure and discarded the CMM-like structure. It seems almost arbitrary what type of model structure is applied to a given problem domain. See Table 1, below, for the classified maturity models.



Table 1. Excerpt of analyzed maturity models according to Mettler et al.'s (2009) classification framework (Composition: CMM = CMM-like, M = Maturity grid; Self-As. = Self-assessment)

| Model | Source | Domain/Topic | Origin | Target audience | Year of publication | Composition | Support of application | Practicality of evidence | Method of application |
| Levels of Information Systems Interoperability | Han and Wikarski, 2002 | Exchange | Business | Business | 1993 | CMM | Questionnaire, Scorecard | Implicit | N/A |
| Levels of Conceptual Interoperability Model | Tolk and Muguira, 2002 | Semantic Data Exchange | Academia | Business | 2004 | M | N/A | Implicit | Self-As. |
| SME Maturity Model | Benguria and Santos, 2008 | Interoperability for SME | Academia | Business | 2008 | CMM | Questionnaire | Implicit | N/A |
| IT-enabled Collaborative Networked Organizations Maturity Model | Tapia, 2009 | IT & Organization Collaboration | Academia | Business | 2009 | CMM | N/A | Implicit | Self-As. |
| Enterprise Contract Management Maturity Model | Saxena, 2008 | Contracting | Business | Business | 2008 | M | ECM MAP (Likert-like Questionnaire) | Implicit | Self-As. |
| Contract Management Maturity Model | Garrett and Rendon, 2006 | Contracting | Business | Business | 2005 | CMM | CMMAT (Likert-like Questionnaire) | Implicit | Self-As. |

A distribution according to the integration perspective may hint towards a favored use of CMM-like models in the organizational integration perspective (2 models within the technical perspective, 8 in the organizational perspective). A closer look at the model design reveals a heterogeneous approach with respect to the presentation and the development of the maturity models, which is independent of their integration perspective. Many articles start with the development of the model and its subsequent application in a case study (e.g. LISI), while others first analyze the current state of the selected domain and perform a thorough comparison between the developmental stages of different groups. This results in models that serve less for the future assessment of maturity within that domain but rather reflect an orderly overview of the status quo (e.g. arcs of integration).

4.3. Model use

Only 28 of the 43 maturity models mention some kind of method of application in terms of assessment types. Most assessments in these cases are conducted in the form of a self-assessment. Some developers mention it explicitly as a way to assess maturity (e.g. IT Service CMM, Kwintes Project); in other cases the assessment methodology can only be determined indirectly, e.g. via a case study in which the maturity model is applied (e.g. ICoNOs MM).


Concerning the practicality of evidence, all analyzed maturity models offer implicit recommendations. There is hardly any explicit description or proposed activity that hints towards an improvement in maturity. Typical assessment methods are written surveys (e.g. ACMM), e-mail/traditional mail surveys (e.g. Stages of Process Integration), interviews (e.g. Level of Inter-organizational Information System Development) or workshops (e.g. BPMM). Again, no common pattern can be identified in terms of a preferred assessment method for a certain model design or integration perspective. Regarding the support of application in terms of assessment documents, some models provide full coverage of all relevant assessment documentation (e.g. ECM MM), others leave out the documents and emphasize the scope and benefits of the assessment (e.g. LISI). Mettler et al. [35] categorize the support of application (support of assessment) into three distinct areas: no documentation, handbook or textual description, and software assessment tool. Fourteen of the analyzed maturity models have no documentation on the implementation of the assessment at all (cf. Table 2). Only Rohloff [43] explicitly refers to a software tool to support the assessment, which is not described in detail. In most publications (28), some kind of documentation is mentioned. These range from an elaborate assessment documentation (e.g. the CMM family by the SEI) to a simple assessment description (e.g. Collaboration Continuum) that is available on request.

Table 2. Support of application of maturity models mainly by handbooks or textual description

| Support of Application | Description | Number of Maturity Models |
| 1 | No documentation | 14 |
| 2 | Handbook or textual description | 28 |
| 3 | Software assessment tool | 1 |

Taking assessment types, assessment methods and assessment documentation into account, there are overall 28 maturity models that provide elements in all three areas. An elaborate or distinct assessment methodology could not be identified. Instead, we used the more abstract categorization of assessment types by Mettler et al. [35] (Self-assessment, Third-party assessment and Certified assessment) to find a relation between assessment type and maturity model design (cf. Table 3). We included a fourth sub-category (Self-assessment/Third-party assessment) to represent those models that allow both assessment types.

Table 3. Maturity grids favor self-assessment, CMM-like models offer a choice

| Model design | Self Assessment | Self Assessment/3rd Party Assessment | 3rd Party Assessment | Certified Assessment |
| CMM-like | 7 | 8 | 0 | 0 |
| Maturity grid | 11 | 2 | 0 | 0 |
| Hybrid | 0 | 0 | 0 | 0 |

The assessment type Self-assessment is applied 18 times. This form of assessment can be found in 11 models based on the Maturity grid design, because this form of model design can be applied quite easily due to its rather simple setup. The CMM-like maturity models, on the other hand, show no clear tendency towards Self-assessment or Self-assessment/Third-party assessment. Furthermore, the complete absence of any kind of exclusive Third-party assessment or Certified assessment is noteworthy. As mentioned before, no maturity models were found with the Hybrid model structure. Interestingly, more than half of the CMM-like models leave the choice between the two assessment types (Self-assessment/Third-party assessment) to the user. The structure of CMM-like maturity models is typically more comprehensive in terms of scale and complexity than that of maturity models based on Maturity grid or Hybrid structures. Based on this general setup, and in particular with regard to the complexity of the original model by the SEI, it would be advisable for an organization to gather some kind of support (e.g. external consultants) for the assessment: small organizations could potentially be overwhelmed due to labor shortages or cost-relevant issues, whereas large enterprises could struggle with the scale and complexity of their own processes. Our observations imply that this consideration does not hold for the CMM-like maturity models in the domain of IOS. Especially the models only available for Self-assessment do not offer the same depth of Key Process Areas as the original model design by the SEI. So, the more comprehensive a CMM-like model becomes, the more configuration possibilities arise for a potential user with respect to the assessment methodology.
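A small Python sketch (our own; the pair list is abbreviated and partly hypothetical, only the sub-category labels match those used above) shows how such a cross-tabulation can be derived mechanically from per-model classification pairs.

```python
# Deriving a Table-3-style cross-tabulation from (model design, assessment
# type) pairs. Abbreviated, partly hypothetical pair list; the full study
# covers one pair per model that documents an assessment type (28 of 43).
from collections import Counter

classified = [
    ("CMM-like", "Self Assessment"),
    ("CMM-like", "Self Assessment/3rd Party Assessment"),
    ("Maturity grid", "Self Assessment"),
    ("Maturity grid", "Self Assessment"),
    # ... one pair per analyzed model
]

crosstab = Counter(classified)
for (design, assessment), n in sorted(crosstab.items()):
    print(f"{design:14s} x {assessment:38s}: {n}")
```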


4.4. Towards an assessment methodology

The non-existent or ambiguous link between maturity models and assessment methods represents a challenge for researchers and practitioners alike, since it limits model applicability. Hence, our aim is to further investigate this link and to provide assistance in navigating from model to assessment method. We do so by combining two aspects of IOS maturity models: firstly, drawing upon the perspective of Bauer and Stickel [3], we classify IOS maturity models on three levels: institutional integration, technical integration and organizational integration. The resulting attribute, Model complexity, serves as one of two factors to determine an assessment methodology. Secondly, we propose the criteria of firm size, unit of analysis and process reach as an indicator of Subject complexity. The corresponding complexity levels are illustrated in Figure 1, below.

[Figure 1. Assessment method matrix. A two-by-two matrix with Subject complexity (low to high) on the horizontal axis and Model complexity (low to high) on the vertical axis; the quadrants are the Limited Scope Method (low/low), the In-Depth Method (high model/low subject complexity), the Wide-Reach Method (low model/high subject complexity) and the Multi-Method (high/high).]

For each complexity combination of the matrix, an approach to assessment can be found. For instance, in the upper left quadrant, an organization with low subject complexity and high model complexity will need to set up a project and include expert advice. Hence, to account for and manage model complexity, execution via a third-party assessment is recommended. In contrast, an organization with low subject complexity and low model complexity would conduct a self-assessment with limited scope, using a small number of less time-consuming and complex instruments. See Table 4 for an overview of the assessment methods and their corresponding exemplary instruments and recommended execution.

Table 4. Assessment method, instruments, execution

| Method Approach | Exemplary Instruments | Execution |
| Limited Scope | Interview, limited survey | Self-assessment |
| In-Depth | Case study, focus group, interviews, simulation | Third-party |
| Wide-Reach | Extensive survey | Self-assessment or Third-party |
| Multi-Method | Multiple, e.g. case study and survey | Third-party or Self-assessment |
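Read as a decision rule, the matrix and Table 4 map the two complexity factors onto a method approach and a recommended execution. The following Python sketch is our own condensation of Figure 1 and Table 4, not code from the paper; the dichotomous low/high coding of the two factors is an assumed simplification, since the paper does not prescribe a concrete measurement scale.

```python
# Our condensation of Figure 1 and Table 4 into a lookup rule (not code from
# the paper): the two complexity factors select a method approach and its
# recommended execution. The "low"/"high" coding is an assumed simplification.
def assessment_method(model_complexity: str, subject_complexity: str) -> tuple[str, str]:
    matrix = {
        ("low",  "low"):  ("Limited Scope", "Self-assessment"),
        ("high", "low"):  ("In-Depth", "Third-party"),
        ("low",  "high"): ("Wide-Reach", "Self-assessment or Third-party"),
        ("high", "high"): ("Multi-Method", "Third-party or Self-assessment"),
    }
    return matrix[(model_complexity, subject_complexity)]

# The upper left quadrant of Figure 1: high model complexity paired with low
# subject complexity calls for an in-depth, third-party-supported assessment.
print(assessment_method("high", "low"))  # ('In-Depth', 'Third-party')
```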

5. Conclusions and limitations

The research results presented in this paper are part of a longitudinal research project dealing with the design of a maturity model for interorganizational systems. The research was conducted within the design phase Comparison of existing maturity models, based on the procedure models for the design of maturity models adapted from de Bruin et al. [10] and Becker et al. [4]. In this particular design phase, we aimed to answer the following research question: What assessment methodologies are used within IOS maturity models in relation to their model design? Based on a literature analysis of overall 43 maturity models in the domain of IOS we found that:

- There is no detailed assessment methodology present that outlines an assessment procedure for assessors based on selected assessment methods.
- There is no direct relation between an assessment methodology/assessment type and the maturity model design.
- Maturity models based on the Maturity grid design structure favor Self-assessment as assessment type.
- CMM-like maturity models allow for Self-assessment and/or Third-party assessment based on the users' requirements and the model depth.

Our hope for an already implemented or established assessment methodology that we could adapt for our maturity model was not fulfilled. There are only rudimentary recommendations or fragments of a methodology that do not comply with Pöppelbuß and Röglinger's [41] call for a thorough assessment procedure. Instead, we found CMM-like models that allow their users to choose an assessment type according to their requirements and resources. This holds especially for the more complex CMM-like models. Therefore, the contribution of our research is twofold: (1) For academia, the analysis shows a lack of rigorous assessment methodology development in most of the investigated models. There is a need for a more structured and analytical approach in future maturity model designs to develop an assessment methodology according to a given set of criteria related to the organizational context. (2) For practitioners, our proposed assessment method matrix provides an assessment method classification according to a set of criteria (Model complexity and Subject complexity). The allocation of assessment methods to their appropriate assessment scope within the organizational context assists in navigating from maturity model to assessment method.

Our analysis is limited by its domain-specific focus (a broader view of existing IS maturity models might produce different results in terms of model design distribution or the occurrence of assessment methodologies). Furthermore, the number of maturity models is not exhaustive and makes no claim to cover all existing maturity models in the IOS domain. Nevertheless, 43 maturity models provide a good starting point in this research direction. It is still up to future research to find out whether a more domain-specific or, respectively, a domain-independent assessment methodology is actually possible: "Producing a maturity model which is both completely rigorous and generic may indeed be extremely difficult." [16]

6. References

[1] D. M. Ahern, A. Clouse, and R. Turner, CMMI Distilled: A Practical Introduction to Integrated Process Improvement, Addison-Wesley, Boston, 2004.
[2] S. Barrett, and B. Konsynski, "Inter-Organisation Information Sharing Systems", MIS Quarterly, Special Issue, 1982, pp. 93-105.
[3] S. Bauer, and E. Stickel, "Contributions of IT to the Genesis of Network Organisations", In: Proceedings of the 1996 Information Resources Management Association Conference, 1996, pp. 46-52.
[4] J. Becker, R. Knackstedt, and J. Pöppelbuß, "Developing Maturity Models for IT Management - A Procedure Model and its Application", Business & Information Systems Engineering, 1(3), 2009, pp. 213-222.
[5] J. Becker, B. Niehaves, J. Pöppelbuß, and A. Simons, "Maturity Models in IS Research", In: Proceedings of the 18th European Conference on Information Systems, 2010, Paper 42.
[6] I. Benbasat, A. S. Dexter, and R. W. Mantha, "Impact of Organisational Maturity on Information System Skill Needs", MIS Quarterly, 4(1), 1980, pp. 21-34.
[7] I. Benbasat, A. S. Dexter, D. H. Drury, and R. C. Goldstein, "A critique of the stage hypothesis: theory and empirical evidence", Communications of the ACM, 27(5), 1984, pp. 476-485.
[8] G. Benguria, and I. Santos, "SME Maturity, Requirement for Interoperability", In: Enterprise Interoperability III - New Challenges and Industrial Approaches (K. Mertins, R. Ruggaber, K. Popplewell, and X. Xu, Eds.), Springer, London, 2008, pp. 29-40.
[9] J. vom Brocke, A. Simons, B. Niehaves, K. Riemer, R. Plattfaut, and A. Cleven, "Reconstructing the Giant: On the Importance of Rigour in Documenting the Literature Search Process", In: Proceedings of the 17th European Conference on Information Systems, 2009, pp. 789-801.
[10] T. de Bruin, R. Freeze, U. Kulkarni, and M. Rosemann, "Understanding the Main Phases of Developing a Maturity Assessment Model", In: Proceedings of the Australasian Conference on Information Systems (ACIS), 2005.
[11] D. Chatterjee, A. H. Segars, and R. T. Watson, "Realizing the Promise of E-Business: Developing and Leveraging Electronic Partnering Options", California Management Review, 48(4), 2006, pp. 60-83.
[12] E. Christiaanse, "Performance benefits through integration Hubs", Communications of the ACM, 48, 2005, pp. 95-100.
[13] C. L. Conwell, R. Enright, and M. A. Stutzman, "Capability maturity models support of modeling and simulation verification, validation, and accreditation", In: Proceedings of the 2000 Winter Simulation Conference (J. A. Joines, R. R. Barton, K. Kang, and P. A. Fishwick, Eds.), Piscataway, NJ, 2000, pp. 819-828.
[14] P. B. Crosby, Quality is Free, McGraw-Hill, New York, 1979.
[15] D. H. Drury, "An empirical assessment of the stages of DP growth", MIS Quarterly, 7, 1983, pp. 59-70.
[16] P. Fraser, J. Moultrie, and M. Gregory, "The Use of Maturity Models/Grids as a Tool in Assessing Product Development Capability", In: Proceedings of the IEEE International Engineering Management Conference, 2002, pp. 244-249.
[17] N. Frick, and P. Schubert, "A Maturity Model for B2B Integration (BIMM)", In: Proceedings of the 24th International Bled eConference, Bled, Slovenia, 2011.
[18] M. T. Frohlich, and R. Westbrook, "Arcs of Integration: An International Study of Supply Chain Strategies", Journal of Operations Management, 19, 2001, pp. 185-200.
[19] G. A. Garrett, and R. G. Rendon, U.S. Military Program Management: Lessons Learned and Best Practices, Management Concepts, Vienna, VA, 2006.
[20] J. Gebauer, and P. Buxmann, "Assessing the Value of Interorganizational Systems to Support Business Transactions", Journal of Management Information Systems, 4(4), 2000, pp. 61-82.
[21] N. Geri, and N. Ahituv, "A Theory of Constraints Approach to Interorganizational Systems Implementation", Information Systems and e-Business Management, 6(4), 2008, pp. 341-360.
[22] Y. Han, S. Tai, and D. Wikarski, Engineering and Deployment of Cooperative Information Systems, Springer, Berlin, Heidelberg, 2002.
[23] A. R. Hevner, S. T. March, J. Park, and S. Ram, "Design science in information systems research", MIS Quarterly, 28(1), 2004, pp. 75-105.
[24] C. P. Holland, "Co-operative supply chain management: The impact of interorganizational information systems", Journal of Strategic Information Systems, 4(2), 1995, pp. 117-133.
[25] C. P. Holland, and B. Light, "A stage maturity model for the Enterprise Resource Planning Systems use", The DATA BASE for Advances in Information Systems, 32(2), 2001, pp. 34-45.
[26] C. L. Iacovou, I. Benbasat, and A. S. Dexter, "Electronic Data Interchange and Small Organizations: Adoption and Impact of Technology", MIS Quarterly, 19(4), 1995, pp. 465-485.
[27] J. L. King, and K. L. Kraemer, "Evolution and organizational information systems: an assessment of Nolan's stage model", Communications of the ACM, 27(5), 1984, pp. 466-475.
[28] W. R. King, and T. S. H. Teo, "Integration between business planning and information systems planning: Validating a stage hypothesis", Decision Science, 28, 1997, pp. 279-308.
[29] P. Love, Z. Irani, C. Standing, C. Lin, and J. Burn, "The Enigma of Evaluation: Benefits, Costs and Risks of IT in Small-Medium Sized Enterprises", Information and Management, 42(7), 2005, pp. 947-964.
[30] M. L. Markus, "Building successful interorganizational systems: IT and change management", In: C. S. Chen, J. Filipe, I. Seruca, and J. Cordeiro (Eds.), Enterprise Information Systems VII, Springer, Dordrecht, 2006, pp. 31-41.
[31] B. Massetti, and R. W. Zmud, "Measuring the extent of EDI usage in complex organisations: strategies and illustrative examples", MIS Quarterly, 20(3), 1996, pp. 331-345.
[32] M. McCluskey, "How mature is your service operation?", Supply Chain Management Review, 8(5), 2004, p. 17.
[33] M. P. McDonald, "Executive Summary: Meeting the Challenge: The 2009 CIO Agenda", Gartner Executive Program Report, 2009.
[34] T. Mettler, and P. Rohner, "Situational Maturity Models as Instrumental Artifacts for Organizational Design", In: Proceedings of the 4th International Conference on Design Science Research in Information Systems and Technology, Philadelphia, 2009.
[35] T. Mettler, P. Rohner, and R. Winter, "Towards a Classification of Maturity Models in Information Systems", In: Proceedings of itAIS 2009, Verona, 2009.
[36] A. Nagy, "Collaboration and Conflict in the Electronic Integration of Supply Networks", In: Proceedings of the 39th Hawaii International Conference on System Sciences (HICSS), Hawaii, 2006.
[37] T. Netland, E. Alfnes, and H. Fauske, "How mature is your supply chain? - A supply chain maturity assessment test", In: Proceedings of the 14th International EurOMA Conference Managing Operations in an Expanding Europe, Ankara, 2007.
[38] R. L. Nolan, "Plight of the EDP Manager", Harvard Business Review, 51(3), 1973, pp. 143-152.
[39] M. H. Ofner, K. M. Hüner, and B. Otto, "Dealing with Complexity: A Method to Adapt and Implement a Maturity Model for Corporate Data Quality Management", In: Proceedings of the Americas Conference on Information Systems (AMCIS), San Francisco, 2009.
[40] M. C. Paulk, B. Curtis, M. B. Chrissis, and C. V. Weber, Capability Maturity Model for Software, Version 1.1, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, 1996.
[41] J. Pöppelbuß, and M. Röglinger, "What makes a useful Maturity Model? A Framework of General Design Principles for Maturity Models and its Demonstration in Business Process Management", In: Proceedings of the 19th European Conference on Information Systems, 2011, Paper 28.
[42] K. Reimers, R. B. Johnston, and S. Klein, "Toward a Theory of IOIS Variance: A New Framework for Studying Inter-Organisational Information Systems", International Journal of Strategic Information Technology and Applications, 1(3), 2010, pp. 36-56.
[43] M. Rohloff, "Case Study and Maturity Model for Business Process Management Implementation", In: Proceedings of the 7th International Conference on Business Process Management (BPM 2009), Ulm, 2009.
[44] A. Saxena, Enterprise Contract Management: A Practical Guide to Successfully Implementing an ECM Solution, J. Ross Publishing, Fort Lauderdale, 2008.
[45] M. Simonsson, P. Johnson, and H. Wijkström, "Model-based IT Governance Maturity Assessment with CobiT", In: Proceedings of the European Conference on Information Systems (ECIS), St. Gallen, 2007.
[46] H. Solli-Sæther, and P. Gottschalk, "The Modelling Process for Stage Models", Journal of Organizational Computing and Electronic Commerce, 20(3), 2010, pp. 279-293.
[47] C. Steinfield, M. L. Markus, and R. T. Wigand, "Through a Glass Clearly: Standards, Architecture, and Process Transparency in Global Supply Chains", Journal of Management Information Systems, 28(2), 2011, pp. 75-108.
[48] R. S. Tapia, "ICoNOs MM: The IT-Enabled Collaborative Networked Organizations Maturity Model", In: Leveraging Knowledge for Innovation in Collaborative Networks, IFIP Advances in Information and Communication Technology, 307, 10th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2009, Thessaloniki, 2009.
[49] A. Tolk, and J. A. Muguira, "The Levels of Conceptual Interoperability Model", In: Proceedings of the Fall Simulation Interoperability Workshop, Orlando, Florida, 2003.
[50] J. Webster, and R. T. Watson, "Analyzing the past to prepare for the future: Writing a literature review", MIS Quarterly, 26(2), 2002, pp. xiii-xxiii.
[51] T. Williams, "Interorganisational Information Systems: Issues Affecting Interorganisational Cooperation", Journal of Strategic Information Systems, 6, 1997, pp. 231-250.