Information & Management 26 (1994) 133-141 North-Holland

Research

Exploring the relationship between IC success and company performance

Tor Guimaraes

Tennessee Technological University, Cookeville, TN, USA

Magid Igbaria

Drexel University, Philadelphia, PA, USA

As the importance of end user computing continues to grow, so does the importance of supporting and managing the related end-user activities. To accomplish this, many organizations have established Information Centers. The major hypothesis in this study is that there is a direct relationship between IC effectiveness or performance level and two major dependent variables: (1) the benefits or payoffs that the organization has derived from end-user computing activities and (2) the organization's business performance. A sample of 252 internal auditing directors provided strong evidence for these relationships; further, the results suggest that the payoffs from end-user computing vary widely among companies and that there is much organizations can do to improve performance in this important area.

Keywords: End user computing; Information center; Critical success factors; Company payoffs; Company performance

Tor Guimaraes holds the J.E. Owen Chair of Excellence in IS at Tennessee Technological University. He has a Ph.D. in MIS from the University of Minnesota and an M.B.A. from California State University, Los Angeles. He was a Professor and Chairman of the MIS Department at St. Cloud State University, and before that, an Assistant Professor and Director of the MIS Certificate Program at Case-Western Reserve University. He has spoken at numerous meetings sponsored by professional organizations including ACM, IEEE, ASM, DPMA, INFOMART, and Sales and Marketing Executives. He has consulted on several IS topics with many leading organizations including TRW, American Greetings, AT&T, IBM and the Department of Defense. He has published over eighty articles in leading journals such as Information Systems Research, Computerworld In Depth section, MIS Quarterly, Decision Sciences, Decision Support Systems, OMEGA, Computers and Operations Research, Information and Management, Database, Communications of the ACM, and others.

Magid Igbaria is a Professor of Management Information Systems at Drexel University. He holds a B.A. in Statistics and Business Administration, and an M.A. in Information Systems and Operations Research from Hebrew University; he received his Ph.D. in management information systems from Tel Aviv University. Formerly, he lectured at Tel Aviv University, Hebrew University and Ben-Gurion University in Israel, and acted as the administrative director of the Center of Management Information Systems (CEMIS) at Tel Aviv University. Dr. Igbaria has published articles on management of IS functions, economics of computers, computer performance evaluation, charging of computer services, econometrical approaches in IS, international IS, and microcomputers in business in Applied Statistics, Communications of the ACM, Computers & Operations Research, Decision Sciences, Information & Management, International Journal of Man-Machine Studies, Journal of Management, Journal of Engineering and Technology Management, OMEGA, MIS Quarterly, and others.

Correspondence to: T. Guimaraes, College of Business Administration, Tennessee Technological University, Cookeville, TN 38505, USA. (615) 372-3385, E-Mail (BITNET): TG5596 TNTECH.

0378-7206/94/$07.00 © 1994 Elsevier Science B.V. All rights reserved
SSDI 0378-7206(93)E0042-3

Introduction

Most organizations are expected to continue to increase their end-user computing (EUC) expenditures, and the number of microcomputers will increase steadily in the 1990s [7]. On the other hand, EUC entails both risks and concerns. It has increased the need to manage data, train users, and manage end-user activities [27,1,4,11,32]. As a result of large company expenditures on EUC technology and its expanding role in departmental and corporate-wide computing activities, organizations have been trying to trim and save costs by moving applications into the hands of end-users. IS and other business managers, struggling with tighter budgets, deteriorating information structures, and escalating facilities costs, have been trying to determine whether IS dollars are being well spent and whether they are getting the most out of their EUC dollars.

The substantial growth of information centers (IC) in the late 1980s is another development that impacts organizational effectiveness in managing EUC. An IC is established to support EUC activities by providing assistance and training. Its importance is demonstrated by the growing number of organizations that have established these centers [10].

With some exceptions [34,41], much of the literature on the impact of EUC on organization performance and the benefits of IC for supporting and managing EUC is based on the authors' personal opinions or evidence from a single organization [12]. Given the great variety of approaches to EUC and IC operations, there is a need for surveys whose results are applicable to a wider set of organizations. Furthermore, there is a need for studies where the corporate perspective on the management of information technology is represented by someone other than IS managers. Last, as information systems assume a more strategic organizational role, the ad hoc assessment of performance commonly used in the past must be replaced by more systematic measures.

The theoretical framework

The basic hypotheses in this study are that IC performance is directly related to EUC success and that EUC success is directly related to company performance.

As the level of EUC activities in an organization grows, so does the need for some types of control (i.e. acquisition policies and procedures, sharing of resources, quality of systems and information) and end-user support [29,38]. Leitheiser and Wetherbe [30] proposed the notion of service support levels as "formal divisions of responsibility between end-users and MIS departments" as the basis for effectively managing EUC in organizations. The idea hopefully will lead to several advantages to the organization: freedom of choice for end-user managers, focusing of the IS department's attention on providing service to end-users, reduction of "finger pointing," a structured approach for supporting end-users, incentives for end-users to follow established guidelines and procedures, and better means for coordinating EUC activities.

Starting in the early 1980s and increasing steadily, a variety of support mechanisms became available to the end-user community in many organizations, and the level of support was directly related to the level of control exercised by IS departments [20]. Information Centers were being strongly recommended as necessary for EUC management and support [14,18,19]. In those days, however, less than 60 percent of Fortune 500 companies, and less than 8 percent of all US companies, had "established a minimal set of microcomputer policies" [42].

IC success

Some organizations have already disbanded their ICs, suggesting an alternative way to managing EUC. This implies a need to investigate IC performance and its value to the organization. Undoubtedly, ICs have evolved over time and are quite different across organizations in terms of their sophistication, size, location, modus operandi, and the variety of services provided to end-users [17,6]. In this study, ICs are very broadly defined to include any group providing support for microcomputer-based EUC.

The most comprehensive and well known attempt at creating a measure for IC performance is the collection of IC Critical Success Factors (CSFs) developed by Magal, et al. [32]. A list of 26 factors was identified from the literature and factor analyzed to produce the five groups shown in Table 1. These 26 CSFs were used to measure IC success in this study.

Table 1
CSFs comprising the five factor groups

Group 1: Commitment to the IC Concept
- Top management support
- Promote IC services
- Organizational acceptance of IC concept
- Commitment of end users to the IC concept
- Career paths for IC staff

Group 2: Quality of IC Support Services
- A competent staff
- Support software packages
- End-user training
- Reliability of applications developed
- Standardized hardware and software

Group 3: Facilitation of End User Computing
- Training for IC staff
- Communication with users
- Cost-effective solutions
- Atmosphere for users
- Understanding user's business and problems
- Manage end-user expectations
- Liaison function with end-user departments

Group 4: Role Clarity
- Provide services to distributed sites
- Define IC mission
- User understanding of data processing
- Chargeback criteria
- Control procedures to ensure standards, policies, etc. are adhered to

Group 5: Coordination of End-User Computing
- Priority criteria for work
- Monitor and coordinate end-user applications development
- Respond to applications requests
- System performance

Company payoffs from EUC or EUC success

Prior research has viewed EUC success from a variety of perspectives and has used varying definitions and measures of success, e.g., in terms of end-user satisfaction [13,25,31], application level of usage [15,26] and system effectiveness [2,23]. These measures focus on individual systems and, unless data can be collected on a representative set of systems per company, are unsuitable for assessing EUC success from a company-wide perspective. Suitable measures for EUC's impact on the organization had to be found elsewhere.

Two concepts can effectively reflect the contribution of EUC to the overall organization and its strategic mission. One represents the extent of improvements in EUC capabilities to support strategic management; the other focuses on how well EUC helps fulfill company objectives. Both assess and reflect the overall success of EUC. Based on an adaptation and integration of the extensive literature on systems, Venkatraman and Ramanujam [39] conceptualized both dimensions and used them to evaluate the success of planning systems. They rationalized the two constructs as follows: "While the degree of improvement in the system's CAPABILITIES reflects the 'means' or the process aspect of the concept of planning system success, OBJECTIVES, as a dimension, is intended to tap the 'end' or outcome benefits of planning." This framework has been used for examining the impact of EUC on key business objectives and on EUC capabilities to support the management of an organization, and, concurrently, for measuring the extent to which EUC capabilities and objectives are fulfilled.

Individual end-users benefit from information technology. From a company perspective, the indicator for EUC success depends on the degree to which it improves six key business objectives: enhancing management development, predicting future trends, evaluating alternatives, improving short- and long-term performance, and avoiding problem areas. In addition, it is important to examine the degree to which EUC helps in overall strategic management. These represent the means and ends (or process and output) perspectives for evaluating EUC success.

EUC can be seen as a company-wide system that supports efficient and effective end-user operations and the strategic management of the organization. Its capability in supporting company management is measured along the following twelve dimensions:
- anticipating surprises and crises,
- identifying new business opportunities,
- identifying key problems,
- fostering managerial motivation,
- enhancing the generation of new ideas,
- communicating top management's expectations throughout the organizational structure,
- fostering management control,
- fostering organizational learning,
- communicating line managers' concerns to top management,
- integrating diverse functions and operations,
- adapting to unanticipated changes, and
- enhancing innovation.

We hypothesize that the EUC capabilities dimension has a direct effect on the objectives dimension. This hypothesis is based on the notion that the EUC "means" lead to the EUC "end."
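The hypothesized direct effect of the capabilities dimension on the objectives dimension can be examined as a simple least-squares slope of the objectives score on the capabilities score. The sketch below uses fabricated illustrative score pairs, not the study's data:

```python
# Sketch: least-squares slope of the EUC "objectives" score on the
# "capabilities" score. The score pairs below are fabricated for illustration.
def ols_slope_intercept(x, y):
    """Return (slope, intercept) of the simple OLS fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return b, my - b * mx

capabilities = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]  # hypothetical 1-5 scale means
objectives   = [1.8, 2.6, 2.9, 3.6, 3.9, 4.4]  # hypothetical 1-5 scale means
b, a = ols_slope_intercept(capabilities, objectives)
print(round(b, 3), round(a, 3))  # positive slope supports "means" -> "end"
```

A positive, significant slope would be consistent with the means-to-end hypothesis; the study itself reports its test with validated multi-item scales rather than this toy fit.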

Company performance

Company performance has also been measured in many different ways [e.g., 37,40]. Many authors have used single items to measure company performance, such as company profitability (return on total assets) [35]. Given the wide variety of ways in which information technology may contribute to a company's performance and the importance of content validity for such a significant measure, we chose a multidimensional scale. Gupta and Govindarajan [21] proposed 12 indicators for assessing company performance: sales growth rate, market share, operating profit, rates of profit to sales, cash flow from operations, ROI, new product development, new market development, R and D activities, cost reduction program, personnel development, and political/public affairs. These indicators were used to measure company performance in this study.

Method

Overview of the survey procedure

This study was designed as a comprehensive survey of EUC problems and payoffs in business organizations. A questionnaire was developed for collecting data from a large group of internal auditing directors (IAs). They were asked to gauge the extent of their EUC problems and payoffs. IAs were chosen as respondents because they generally serve two roles - as end-users and as auditors who are likely to be aware of the problems and benefits from EUC activities. Furthermore, the group is relatively homogeneous, and this strengthens internal validity of the data collection instrument. We felt that a survey of end-users or their support personnel would have greater likelihood of bias.

The survey questionnaire was pretested for content validity by six IAs before it was finalized. Several meetings were held to refine the instrument, particularly to examine its readability and reflect the auditors' preferred phraseology. This process improved the wording of many questions. The final form was distributed to the IAs of 950 organizations randomly selected from a list of approximately 4,000 members of an Internal Auditors Association. This sample should represent a wide variety of organizational settings: small and large companies, and different industry types. Participation was voluntary; the cover letter assured confidentiality of the responses and that only summary information would be published. The survey was accompanied by a postage-paid envelope addressed for direct return.

Sample description

Through the procedure just described, of the 950 IAs selected, 252 surveys (26.5 percent) were returned. This appears to be consistent with other mail surveys [24,16]. However, it was still necessary to determine the representativeness of the sample. Chi-square tests were used with a sample of non-respondents to check for the possibility of non-response bias. The results of this test support the conclusion that the companies in the sample are quite similar to those in the total population with respect to size and type.

The demographics of the responding firms are presented in Table 2. Relatively large IS budgets (over $25 million) were reported by 78 percent of the respondents. In addition, more than half of the firms had more than 2,300 personal computers; and nearly 25 percent had more than 6,500 PCs. Most of the EUC support activities are provided by the IS department. They include help desk, coaching, training, maintenance, communications support, and other activities. It was also reported that the user department of most of the responding firms was responsible for backup and recovery and in some cases for coaching and training.
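The non-response check described above amounts to computing a Pearson chi-square statistic on a respondent versus non-respondent contingency table. A minimal sketch, with hypothetical counts (the paper does not report the underlying table):

```python
# Sketch: Pearson chi-square statistic for a respondent vs. non-respondent
# contingency table. The counts below are hypothetical, not the study's data.
def chi_square(table):
    """Pearson chi-square statistic: sum of (observed - expected)^2 / expected."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    return sum((obs - rows[i] * cols[j] / total) ** 2
               / (rows[i] * cols[j] / total)
               for i, r in enumerate(table) for j, obs in enumerate(r))

# Rows: respondents, sampled non-respondents; columns: three company-size classes.
table = [[40, 120, 92],
         [18, 55, 45]]
stat = chi_square(table)
print(round(stat, 3))  # a small statistic -> no evidence of non-response bias
```

With 2 rows and 3 columns the statistic has 2 degrees of freedom; values well below the 5% critical value (5.99) would support the paper's conclusion that respondents resemble the population on size and type.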

Table 2
Demographic characteristics of the participating organizations (n = 252)

Industry                 Percentage of organizations
Manufacturing
Insurance                11.9
Multi-industry           10.3
Banking                  8.3
Wholesaling              5.2
Utilities                4.4
Transportation           3.2
Mining                   2.8
Food-Processing          2.0
Department Stores        2.0
Distribution
Oil
Others                   8.6

Revenues                 Percentage of organizations
0-40M                    2.4
50-100M                  2.0
100-200M
200-500M
500M-1B                  13.0
1-2B                     20.7
2-5B                     26.0
5-10B                    15.4
Over 10B                 7.9

MIS Budget               Percentage of organizations
0-1M                     3.8
1-25M                    18.5
25-50M                   14.4
50-100M                  10.6
More than 100M           12.1

Measurement and data analysis

The internal consistency reliability of the scales used in this study was measured with the Cronbach's alpha coefficient [8]. The alpha coefficient is the basic formula for determining reliability based on internal consistency, and is the most widely used index of internal consistency reliability [9,28]. Cronbach's alpha has been found to be a lower bound to the true reliability, i.e., alpha is a conservative estimate of the reliability of the scale [3]. The high alphas of the five factors provide confirmation of the homogeneity of the items comprising them, and indicate acceptable levels of reliability. Psychometricians have suggested that in the early stages of research on hypothesized measures of a construct, reliabilities of 0.70 or higher are adequate [33], whereas for widely used scales, the reliabilities should not be below 0.80 [5]. In this study, the estimated alpha coefficients were considered adequate and acceptable since all exceeded 0.80. The use of previously validated measures is methodologically a major strength of this study.

The difference in scale ranges, 1 to 7 for IC success and 1 to 5 for EUC success and for company performance, does not represent a problem. Furthermore, to change a scale would undermine its validity. A potentially more serious problem is the subjectivity of the responses. This is a problem common to all empirical research; its complete elimination depends on the availability of objective measures, which in most cases are non-existent, very difficult, or impossible to implement in practice. This "common method variance" due to self-reporting bias can be a critical problem if the researchers are not careful with their data collection instruments and process. The risk factors involved are the use of single-item measures and measures not previously validated and/or which lead to biased data. Two of the most relevant articles on this issue are by Spector [36], who directly addresses it, and by Huber and Power [22], who provide a very useful discussion of what causes inaccurate and biased data and the many things which can be done to manage the problem. We believe this study has effectively dealt with this problem through the use of previously validated multi-item scales and through the re-testing and corroboration of scale validity. Further, the relatively large standard deviations suggest that responses showed no systematic bias toward the center of the scales nor unduly favorable views about the participating organizations.
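For reference, Cronbach's alpha for a k-item scale is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on a small hypothetical response matrix (rows are respondents, columns are scale items):

```python
# Sketch: Cronbach's alpha for a multi-item scale. The response matrix
# below is hypothetical, not the study's data.
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(responses[0])                           # number of items
    item_vars = [variance(col) for col in zip(*responses)]
    total_var = variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [[5, 4, 5, 4],
             [3, 3, 2, 3],
             [4, 4, 4, 5],
             [2, 1, 2, 2],
             [5, 5, 4, 4]]
print(round(cronbach_alpha(responses), 3))  # -> 0.95
```

An alpha above 0.80, as for all scales in this study, is the threshold the cited psychometric literature treats as adequate for widely used scales.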

Table 3
Intercorrelations among ratings for the five critical success factors of IC

Factor                                   Mean   S.D.
1. Commitment to the IC concept          4.15   1.09
2. Quality of IC support services        4.24   1.11
3. Facilitation of end user computing    4.13
4. Role clarity                          3.89
5. Coordination of end user computing    3.96

All correlations are significant at p
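Each entry of an intercorrelation table like Table 3 is a Pearson product-moment correlation between two factor-group score vectors across respondents. A minimal sketch with hypothetical scores:

```python
# Sketch: Pearson correlation between two factor-group score vectors,
# as in an intercorrelation table. The score vectors below are hypothetical.
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

commitment = [4.0, 5.2, 3.1, 4.8, 3.9]   # hypothetical group-1 scores
quality    = [4.2, 5.0, 3.5, 4.6, 4.1]   # hypothetical group-2 scores
print(round(pearson_r(commitment, quality), 3))
```

Computing this coefficient for every pair of the five factor groups yields the lower triangle of the intercorrelation matrix reported by the authors.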