Journal of Operations Management 32 (2014) 313–336


The effect of performance measurement systems on firm performance: A cross-sectional and a longitudinal study

Xenophon Koufteros a,∗, Anto (John) Verghese a, Lorenzo Lucianetti b

a Department of Information & Operations Management, Mays Business School, Texas A&M University, College Station, TX 77843-4217, United States
b Department of Management and Business Administration, University of Chieti and Pescara, Viale Pindaro 42, 65127 Pescara, Italy

Article info

Article history: Received 11 December 2013; Received in revised form 13 June 2014; Accepted 17 June 2014.

Keywords: Performance Measurement Systems; Operations management; Accounting; Financial performance; Empirical; Panel data

Abstract

Performance measurement (PM) systems have been popularized over the last 20 years and the operations management literature is replete with discussion of metrics and measurement systems. Yet, a comprehensive nomological network relating types of PM system uses to organizational capabilities and performance is lacking. Furthermore, there is scant empirical evidence attesting to the explanatory efficacy of PM systems as it relates to organizational performance. We view PM system uses through the lens of Resource Orchestration Theory (ROT) and explore specific relationships of underlying variables by relying on Organizational Information Processing Theory (OIPT). Resting on the extant literature, we identify two types of uses: Diagnostic Use (the review of critical performance variables in order to maintain, alter, or justify patterns in an organizational activity) and Interactive Use (a forward-looking activity exemplified by active and frequent involvement of top management envisioning new ways to orchestrate organizational resources for competitive advantage), and relate them, along with their interaction (i.e., dynamic tension), to organizational capabilities. We further link capabilities to target performance, which subsequently impacts organizational performance (operationalized through both perceptual and objective financial performance measures). The nomological network is tested via a cross-sectional study (386 Italian firms) while the efficacy of PM systems to explain organizational performance is examined by using longitudinal panel data approaches over a 10-year period. There is sufficient evidence to suggest that the use of PM systems leads to improved capabilities, which then impact performance. Contrary to the extant literature, however, we discovered that Diagnostic Use appears to be the most constructive explanatory variable for capabilities. On the other hand, in the longitudinal study, we also uncovered that Diagnostic Use experienced depreciating returns as far as objective financial measures are concerned. Also, when high levels of Diagnostic Use were coupled with low levels of Interactive Use, they produced the lowest levels of organizational capabilities. Conversely, high levels of both types of PM system use generated extraordinarily high levels of capabilities. There is sufficient evidence to suggest that organizations cannot rely merely on Diagnostic Use of PM systems. We also learned that the effects of PM systems (measured via adaptation) fade unless high learning rates are applied. We offer detailed recommendations for future research which have theoretical as well as empirical implications.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction The operations management literature has been a strong proponent of metrics and respective PM systems for quite some time. For instance, Bititci et al. (1997) offer a developmental guide to

∗ Corresponding author. Tel.: +1 979 845 2254/fax: +1 979 845 5653. E-mail addresses: [email protected] (X. Koufteros), [email protected] (A.J. Verghese), [email protected] (L. Lucianetti). http://dx.doi.org/10.1016/j.jom.2014.06.003 0272-6963/© 2014 Elsevier B.V. All rights reserved.

construct integrated PM systems and Gunasekaran et al. (2004) proposed a framework for supply chain performance measurement. Melnyk et al. (2004) discuss metrics and performance measurement while Neely (1999; 2005) furnished a treatise on the evolution of performance measurement research in operations management. Later on, Gunasekaran and Kobu (2007) provided a literature review of performance measures and metrics in logistics and supply chain management. While the topic is popular, what is vividly missing from the literature is a judicious examination of how companies actually use PM systems to orchestrate their responses to

organizational challenges and whether such uses do in fact enhance operational, strategic, and external stakeholder related capabilities and performance over time. PM systems are integral to resource orchestration processes and over the last three decades many organizations have invested enormous amounts of capital, time, and effort developing and implementing such systems. Undoubtedly, one of the most popular paradigms is the Balanced Scorecard, first documented and articulated by Kaplan and Norton (1992). There are nevertheless numerous other measurement frameworks (Bititci and Turner, 2000) that have been proposed and implemented–e.g. the Performance Measurement Matrix (Keegan et al., 1989), the Result and Determinants Model (Fitzgerald et al., 1991), the Performance Pyramid (Lynch & Cross, 1992), and the Performance Prism (Neely et al., 2002). These formal performance systems are utilized as mechanisms that enable organizations to orchestrate their resources more effectively. The Resource-Based Theory (RBT) has long argued that possessing valuable, rare, inimitable, and non-substitutable resources is vital to firm sustained advantage (Hitt et al., 2011; Wowak et al., 2013). But as Hansen et al. (2004, p. 1280) state “what a firm does with its resources is at least as important as which resources it possesses.” Sirmon et al. (2011) similarly note that while possession of resources is essential, the ability of a firm to “orchestrate” its resources is more fundamental as the firm bids to prosecute its strategic objectives. Orchestration of resources however is also subject to top management who mobilizes the vision to direct and use firm resources to achieve objectives (Chirico et al., 2011; Crook et al., 2008). Resource Orchestration Theory (ROT) is an emerging theoretical stream of work which rests on the conceptual work of Sirmon et al. (2007) and Helfat et al. (2007). Hitt et al. (2011) argue that “Resources orchestration is concerned with the actions leaders take to facilitate efforts to effectively manage the firm’s resources” (p. 64). Such actions include for instance the structuring of the firm’s resource portfolio, bundling resources into capabilities, and leveraging the capabilities to create value to customers (Hitt et al., 2011; Sirmon and Hitt, 2003; Sirmon et al., 2007). According to Hitt et al. (2011) and Holcomb et al. (2009), while each action and its particular nuances are vital, the synchronization of actions can contribute positively towards performance. To manage each action and to synchronize the orchestration of resources, leaders rely on mechanisms such as PM systems which yield information regarding the functioning of their resource portfolio and bundle of capabilities (Hitt et al., 2011). Such information is critical for leaders because it enables them to make crucial adjustments to their resources and mobilize requisite resources as conditions change. Melnyk et al. (2004) recognized the orchestrating role of PM systems in operations management and assert that the “performance measurement system is ultimately responsible for maintaining alignment and coordination” (p. 213). Operations management leaders, for instance, need information regarding inventory performance to decide whether additional space to house inventory is necessary in order to pursue a new “same-day delivery” strategy that demands high service levels, such as the expansive Amazon.com Local Express Delivery strategy. 
Via the diagnostic attributes of a PM system, managers can focus attention on issues of strategic significance, monitor performance, and detect whether the desired service level can be achieved given the current level and blend of resources. In addition, the active and personal engagement of the leadership with performance measurement processes can serve as a catalyst in orchestrating the acquisition and bundling of essential resources and capabilities to meet delivery targets. Melnyk et al. (2004) highlight that metrics and respective PM systems serve “as essential links between strategy, execution, and ultimate value creation” (p. 209). A PM system can also be characterized as a management control system that incorporates a

structured framework specifying key financial and non-financial performance metrics. From a theoretical point of view, a PM system can be described as an ambidextrous system because it incorporates both mechanistic and organic elements. As an orchestration mechanism, the organization can use the PM system to control organizational behavior (akin to a mechanistic use), but it can also use it to promote organizational innovation and strategic renewal (resembling an organic use). The literature, however, tends to focus more on the "mechanistic" use of PM systems while the more "organic" use is generally neglected. The mechanistic use is termed diagnostic use in the literature and is primarily responsible for furnishing information. From an organizational information processing theory (OIPT) perspective (Galbraith, 1974), diagnostic use serves to reduce uncertainty. The organic use, on the other hand, can be described as interactive in nature and is deployed by top management to enact debate and reduce equivocality. Simons (1995) acknowledges the complementary nature of the two systems, but only a few studies explicitly test relationships between types of uses of PM systems (e.g., Widener, 2007) or specify and account for their interactions (e.g., Henri, 2006a). Moreover, these authors operationalize organizational performance only via perceptual measures rather than objective and longitudinal financial performance measures. Underscoring the importance of PM systems, Kaplan and Norton (2008) made a rather revealing statement suggesting that "We believe that if you don't measure progress toward an objective, you cannot manage and improve it" (p. 7). From an ROT perspective, firms that deploy their PM systems should be capable of shaping capabilities to meet or exceed target performance. Highlighting and motivating interest in adopting a PM system is, therefore, the claim that organizations with a PM system outperform their counterparts without one (Davis & Albright, 2004; Crabtree & DeBusk, 2008). Unfortunately, this claim is debatable, in part because there is only a handful of published empirical studies investigating it (e.g., Ahn, 2001; Chenhall & Langfield-Smith, 1998; Henri, 2006a; Hoque & James, 2000; Ittner & Larcker, 1998; Chenhall, 2005; Widener, 2007) and in part because the extant empirical literature has reported mixed results regarding the effects of PM system usage on organizational performance (Chenhall & Langfield-Smith, 1998; Ittner et al., 2003a). Henri (2006a) surmises that prior work examined the role of PM systems toward strategy implementation and strategy formulation, but concedes there is scant empirical evidence attesting to the professed virtues of such systems. The operations management literature is especially devoid of studies in this respect, and much of what is currently published can be ascribed to the accounting discipline. Henri (2006a) submits that the specific relationship between PM systems and strategy is ambiguous and at times contradictory and attributes such results to differences in definitions and operationalizations of the variables. Pavlov and Bourne (2011) add that the mechanism relating a PM system to performance is poorly understood. Henri (2006a) notes that prior research has specified and tested direct links between PM system usage and performance, but the effects may actually be reflected instead by the capabilities that PM systems incite as orchestration mechanisms.
Pavlov and Bourne (2011) argue that it has not been demonstrated exactly how PM systems are linked to performance, thus leaving the gap between PM systems and performance still unresolved. As Pavlov and Bourne state, the power of a PM system can be seen as significant and yet somewhat opaque. In other words, there is still a “black box.” Overall, there are three gaps in the literature which merit investigation. The first gap relates to the specific types of uses of PM systems: For what purposes do organizations use PM systems? The second and related gap pertains to the modality by which different types of uses of PM systems impact performance: How do specific

types of uses of PM systems actually affect performance? Stated otherwise, there is a lack of a nomological network that relates specific types of uses of PM systems to performance. Exploring the effectiveness of each type of use is important because organizations can direct resources and effort toward specific types of uses in order to maximize returns. The third gap concerns the efficacy of PM systems to explain variation in performance: Does the implementation of PM systems benefit the organization financially over time? Both the operations management and the accounting literatures lack empirical studies that demonstrate the financial impact of PM system implementations from a longitudinal perspective. Davis and Albright (2004) examined the impact of PM systems on performance (i.e., key financial measures pertinent only to financial institutions) but the study was constrained to a single firm (i.e., a bank) and several of its branches, and assessed change in performance only across two periods. This manuscript tackles the first two gaps in the literature by identifying salient types of uses and by constructing a nomological network of constructs. Then, based on literature (e.g., Henri, 2006a) that posits that the effects of PM systems on performance are communicated via the capabilities they engender, it examines how each type of use of a PM system impacts organizational capabilities across three dimensions: operational capability, strategic management capability, and external stakeholder relations capability. It articulates simple effects as well as interactions amongst types of PM system usage, while controlling for an extensive list of demographics and tangible and intangible resources. In essence, we are able to disentangle the orchestrating role of PM systems from the mere possession of resources. Furthermore, the relationship between organizational capabilities and target performance is tested. Finally, the link between target performance and financial performance is tested utilizing both subjective and objective performance measures. To tackle the third gap in the literature, the impact on four financial ratios is studied over time (before and after implementation of a respective PM system) using several methods. Analyses rely on primary and secondary data over a ten-year time horizon. The longitudinal effects of PM system usage are assessed using panel data analysis approaches and Stata 12.
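To make the longitudinal gap concrete, the sketch below illustrates one common way such a panel analysis can be set up: a two-way fixed-effects regression of a financial ratio on an indicator of PM system use, with firm and year effects. It is a minimal illustration under our own assumptions (the variable names firm, year, pm_use, and roa, and the least-squares-dummy-variable estimator), not the exact specification or software used in the study.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: one row per firm per year, with a financial
# ratio (here ROA) and an indicator of PM system use in that year.
panel = pd.DataFrame({
    "firm":   ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "year":   [2001, 2002, 2003] * 3,
    "pm_use": [0, 1, 1, 0, 0, 1, 1, 1, 1],
    "roa":    [0.04, 0.06, 0.07, 0.03, 0.03, 0.05, 0.08, 0.09, 0.09],
})

# Two-way fixed effects via dummy variables: firm effects absorb stable firm
# characteristics, year effects absorb economy-wide shocks, so the pm_use
# coefficient reflects within-firm change in ROA around PM system adoption.
model = smf.ols("roa ~ pm_use + C(firm) + C(year)", data=panel).fit()
print(model.params["pm_use"])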

2. Literature review & theory development A review of the operations and supply chain management literature suggests that there is no shortage of discussion regarding metrics and PM systems (e.g., Neely, 1999; Bourne et al., 2000; Bullinger et al., 2002; Melnyk et al., 2004; Neely, 2005; Hult et al., 2008; Chia et al., 2009; Lehtinen & Ahola, 2010; Taylor & Taylor, 2013; Garengo & Sharma, 2014). Nor is there a scarcity of frameworks for measuring strategic, tactical, and operational performance at the plant level or in a supply chain (Bititci et al., 1997; Bititci & Turner, 2000; Neely et al., 2000; Gunasekaran et al., 2001; Gunasekaran et al., 2004; Gunasekaran and Kobu, 2007; Estampe et al., 2013). The implied assumption, however, is that PM systems are by their very nature conducive towards better performance because they can be used to benchmark operations against targets and help the organization orchestrate resources to enhance future performance. Metrics and performance measurement are seen as critical elements in translating an organization's mission, or strategy, into reality and for maintaining alignment and coordination (Melnyk et al., 2004). Thus, PM systems are typically ascribed positive evaluations. The majority of studies that have surfaced in the literature assess PM system effectiveness in relation to overall organizational performance (Braam & Nijssen, 2004; Crabtree & DeBusk, 2008; Davis & Albright, 2004; Ittner, Larcker, & Randall, 2003b), thereby assuming a direct association between the PM system and

performance. In this respect, Stede et al. (2006) found that regardless of strategy, organizations with more extensive PM systems, especially those that included objective and subjective nonfinancial measures, have better overall performance. However, the literature suggests that the findings are inconsistent (Bourne et al., 2013; de Leeuw & van den Berg, 2011; Hamilton & Chervany, 1981). At the organizational level, Henri (2006a) and Widener (2007) advocate that a more in-depth understanding of the engendering role of PM systems on organizational capabilities may help to resolve some of the ambiguous findings. Recently, Grafton et al. (2010) explored the process by which PM systems impact the way managers orchestrate their resources via mobilization to achieve competitive advantage and contribute to performance outcomes. Relying on RBT, Grafton et al. (2010) argue that the use of performance measurement information for feedback and feed-forward control influences the extent to which an organization is able to exploit and identify its strategic capabilities, respectively. Modelling capabilities as a mediating variable would allow researchers to explore the processes by which PM systems enhance organizational outcomes. Also, the extant literature describes several types of PM system uses (e.g., coordination, monitoring, diagnosis, problem solving, legitimization, scorekeeping, focusing attention). Simons' (1994) work describing types of uses or purposes of PM systems has been distinctively influential in the literature. Simons (1994) effectively covers the domain of PM system types or styles of uses (Bisbe & Otley, 2004), yet he encapsulates them efficiently via only two categories (i.e., diagnostic and interactive uses), which rely on respective diagnostic systems and interactive systems. According to Simons (1994), diagnostic systems are "formal feedback systems used to monitor organizational outcomes and correct deviations from pre-set standards of performance" (p. 170). Diagnostic systems are in essence information systems that reduce uncertainty and thus aid strategy implementation (Bisbe & Otley, 2004). Interactive systems are used "by top managers to regularly and personally involve themselves in the decision activities of subordinates" (Simons, 1994, p. 171). Bisbe and Otley (2004), resting on the work of Simons (1994), suggest that interactive systems help organizations adapt to changes in the environment by orchestrating organizational responses. The modality by which PM systems enable resource orchestration can be explained via the theoretical lens of organizational information processing theory (OIPT) (Daft & Lengel, 1986; Galbraith, 1974). As Melnyk et al. (2004) state, "An information processing perspective offers yet another potentially rewarding way to look at metrics" (p. 214). Choi et al. (2013) state that PM systems "function as a framework that organizes the firm's information environment around its strategy" (p. 106). According to Grosswiele et al. (2012), "Performance measurement aims to provide decision makers with information that enables them to take effective actions and evaluate whether a company is progressing in line with its strategy" (p. 1017). Bisbe and Malagueño (2012) suggest that PM systems "...support the decision-making processes of an organization by gathering, processing, and analyzing quantified information about its performance, and presenting it in the form of a succinct overview" (p. 297).
Organizations, however, do not always have the requisite information and thus may face uncertainty; sometimes, even when they do possess information, it may have multiple and conflicting interpretations, which creates equivocality (Daft & Lengel, 1986). In order to tackle facets of both uncertainty and equivocality, organizations turn to PM systems, which can furnish relevant information and engender debate and enactment of organizational action. This study aims to examine how different types of PM system use can fuel organizational capabilities leading to improved organizational performance. Specifically, this research focuses on

[Fig. 1 depicts the research framework: Diagnostic Use, Interactive Use, and their Dynamic Tension (Diagnostic × Interactive) are related to Strategic Management Capability, Operational Capability, and External Stakeholder Relations Capability; these capabilities are related to Target Performance, which in turn is related to Subjective Financial Performance and the objective measures ROTA, ROA, ROE, and EBIT.]

Fig. 1. Research framework. Note: Effects on ultimate dependent variables are controlled for industry, firm size, geographic diversification, R&D expenditures, Industrial Patents & Intellectual Property Rights, Plant & Machinery, and Industrial & Commercial Equipment. These effects are displayed in Table 4.

the orchestrating roles of PM systems: "diagnostic use" and "interactive use." According to Henri (2006a), "These two types of use work simultaneously but for different purposes. Collectively, however, their power lies in the tension generated by their balanced use, which simultaneously reflects a notion of competition and complementarity" (p. 531). Fig. 1 displays the research framework. 2.1. Diagnostic PM system use Diagnostic Use can be defined as a set of formalized procedures that use information to maintain or alter patterns in an organizational activity (Henri, 2006a). These procedures may include reporting systems, monitoring tools, and performance results being communicated to all staff. Also, such procedures may describe a conceptual model for understanding drivers of organizational success or how excellence may be attained (i.e., a success map or strategy map), or evaluate critical success factors to monitor performance and specific targets. Widener (2007) states that diagnostic use motivates employees not only to perform but also to align their behavior with organizational goals. Diagnostic use, however, comes in many flavors. In this paper, we focus on three rather comprehensive diagnostic uses which are tasked to furnish information and aid resource orchestration: "monitoring", "focusing attention", and "legitimization." Monitoring enables managers to track performance against a plan and identify what, if anything, is going awry (and needs correction) (Henri, 2006a,b). The monitoring element of diagnostic use enables managers to benchmark against targets (Widener, 2007). For instance, top management may set goals for customer service level. Monitoring can help them establish whether goals are met and pinpoint where performance is lacking. Focusing attention generates information which is pertinent in directing organizational members to issues of strategic significance. It draws attention to common issues, critical success factors, and integration. Widener (2007) further notes that focusing attention sets boundaries or constraints on employee behavior, akin to the boundary system advocated by Simons et al. (2000). By focusing attention on critical success factors, such as customer service level, top management can promote unity of effort across organizational units such as purchasing, operations, and distribution, since many constituents influence the capability of the organization to obtain and sustain high levels of customer service. Using a PM system for legitimization is intended to justify and qualify actions and ensure they are aligned with strategy. In this sense, the legitimization process generates information to enhance the rightfulness and credibility of decisions (Henri, 2006b) and to heighten the legitimacy of current and future organizational action. Legitimization can be seen as a political tool

(Henri, 2006b) that management can use to assess whether a specific action was judicious. One of the managers we interviewed for this study commented that the methodological validity or the legitimization of the planning process can only be ascertained if it is verifiable ex post. Demonstrating that decisions are credible can positively impact stakeholders’ support and heighten trust in management, which should positively impact the ability to meet or exceed performance targets. Top management can use the PM system to obtain data regarding customer service level to legitimize strategic choices and garner employee support and commitment. The three specific purposes are in essence the nuances that reflect the diagnostic component of PM system use. Diagnostic Use can be thought of as a higher-order variable composed of monitoring, focusing attention, and legitimization. Monitoring ensures that managers know when standards are not being met and thus can make corrections; focusing attention warrants that all levels of the organization remain on the same page and in communication about where the organization is headed while legitimization attests that those decisions are justified and are well-grounded in the organization’s business strategy. Diagnostic use can be used as a tool to quantify actions (Neely et al., 1995) across a variety of metrics. Henri (2006a) suggests that these metrics can assume a variety of orientations such as being financial or non-financial, internal or external, short or long term as well as ex post or ex ante (Franco-Santos et al., 2012). The diagnostic use of PM system also “reflects two important features associated with mechanistic controls: (i) tight control of operations and strategies, and (ii) highly structured channels of communication and restricted flows of information” (Henri, 2006a, p.7). This formal use of PM system provides a mechanistic approach to decision making which furnishes signals when issues such as productivity and efficiency have fallen (Henri, 2006a) outside expectations. However, as a mechanistic control mechanism, diagnostic use, as Henri (2006a) states, has been associated with several dysfunctional behaviors in terms of smoothing, biasing, focusing, filtering, and illegal acts (Birnberg et al., 1983). “These distortions constitute defensive routines that aim to reduce potential embarrassment or threat, or to improve personal interest” (Henri, 2006a, p.7). Given some of the potential for adverse effects, Henri (2006a) in fact labels diagnostic use as a negative force and even hypothesizes a negative relationship between Diagnostic Use and organizational capabilities. It is noted however that Henri (2006a) operationalized Diagnostic Use by measures reflecting only monitoring. Unlike Henri (2006a), we suspect diagnostic use may have a net positive contribution especially in the presence of high interactive use. Diagnostic use is critical because it furnishes information regarding deviations, positive or negative, against expectations. While firms tend to focus on negative deviations rather than

positive deviations, the PM system itself is quite adept and furnishes valuable information that can reduce uncertainty and thus improve organizational capabilities. A negative deviation prompts action to steer the organizational energy and resources towards the targets while a positive deviation can be utilized to reinforce organizational efforts. PM systems via diagnostic use can also furnish information that can be consumed to improve relationships with stakeholders. Information generated through PM systems can be pertinent to customers who evaluate suppliers for future business opportunities. Such information may also be germane to the market, which determines the valuation of the firm and its attractiveness to potential investors. Agreeably, diagnostic use may somewhat limit the role of a PM system to a measurement tool (Henri, 2006a), but even as a tool it is nevertheless essential because it plays a significant role in orchestrating resources, especially when deployed concurrently with interactive use. 2.2. Interactive PM system use The interactive use of PM systems tends to have a more developmental role and represents a positive force (Henri, 2006a) employed to expand opportunity seeking and learning throughout the organization. Using a PM system interactively forces dialogue and stimulates the development of new ideas and initiatives (Henri, 2006a; Widener, 2007). According to Henri (2006a), when a PM system is used interactively, the information generated is a recurrent and important agenda item for senior leadership; frequent and regular interest is fostered throughout the organization; data are discussed and interpreted among managers from different hierarchical levels; and “continual challenge and debate occur concerning data, assumptions, and action plans” (Henri, 2006a, p.5). Bisbe and Otley (2004) state that senior managers, through their personal involvement and commitment, can use interactive systems to orchestrate responses to environmental perturbations and craft subsequent organizational renewal. Employing a PM system from an interactive use perspective can generate pressure across organizational levels and motivate further information gathering while stimulating debate. Interactive use embraces institutional dialogue and debate and stimulates information exchange and communication (Henri, 2006a), which play a vital role in reducing equivocality. In addition, interactive PM system use equips the organization with two attributes. It expands the organization’s information processing capacity and engenders interaction amongst constituents (Henri, 2006a), leading to more eloquent orchestration of resources. For example, top management can use the PM system to critically evaluate discrepancies between targeted and actual customer service levels. Top management gets intimately involved with the use of the PM system to analyse root causes and orchestrate necessary resources to achieve targets. Henri (2006a) labels interactive use as a “positive” force suggesting that interactive use promotes opportunity seeking behavior and learning across organizational levels. According to Henri (2006a), the interactive use of a PM system expands its role to a strategic management tool. 2.3. 
Dynamic tension – diagnostic and interactive PM system roles PM systems have two complementary and interdependent roles (Mundy, 2010) which are information based: (1) to exert control over the attainment of organizational goals and (2) to enable employees to search for opportunities and solve emergent problems via top management commitment and involvement (Ahrens & Chapman, 2004; Chenhall & Morris, 1995; Simons, 1995; Zimmerman, 2000). These roles necessitate a balance between taking actions congruent with the organization’s goals while also

giving employees sufficient autonomy to make decisions (Roberts, 1990; Sprinkle, 2003). When combined, controlling and enabling uses of a PM system create dynamic tension (Simons, 1994) which harvests unique organizational capabilities and competitive advantages through organizational dialogue, mutual understanding and better direction (Henri, 2006a). An organization’s inability to balance different uses of PM systems is associated with slower decision making, wasted resources, instability and, ultimately, lower performance (Bisbe et al., 2007; Henri, 2006a). Our theoretical model investigates whether the joint use of PM systems for diagnostic and interactive purposes creates dynamic tension that reflects complementarities (focusing on intended and emergent strategies). According to Henri (2006a, p. 533), “. . . the diagnostic use of PM systems may represent a mechanistic control tool used to track, review, draw attention, and support the achievement of predictable goals. . .” via the provision of information. On the other hand, “. . .interactive use may be considered as an organic control system supporting the emergence of communication processes and the mutual adjustment of organizational actors. . .” which are used to reduce equivocality and stimulate change. Also, diagnostic use of a PM system can be seen as a pure measurement instrument, while interactive PM system use can elevate its role to a strategic management tool (Kaplan & Norton, 2001). Henri’s (2006a) empirical work concludes that there is evidence to suggest that using the two levers of control generates tension which then positively affects performance. Henri tested simple effects and modelled tension as an interaction effect but Henri’s operationalization of interactive use does not adhere to Simons (1994) description. Henri (2006a) operationalizes interactive use via measures of focusing attention while Simons (1994) states that interactive use is described by nuances such as top management commitment and engagement. In contrast to Henri, Widener (2007) operationalized diagnostic use via measures of monitoring and focusing attention while she operationalized interactive use via measures of top management engagement. Widener also suggests that the two levers are complementary and based on empirical evidence claims that the full benefits from a PM system can only be derived when they are used from both diagnostic and interactive perspectives. Widener (2007) however neither specified nor tested interactions between the two systems. 2.4. Hypotheses: PM system USE→ ORGANIZATIONAL CAPABILITIES While much of the PM systems literature posits direct effects on performance, this perspective has been challenged and empirical evidence is scarce. In essence, recent literature supports the notion that diagnostic and interactive uses of PM systems advance organizational capabilities which subsequently help the organization meet its targets. Organizational capabilities “refer to a firm’s capacity to deploy Resources, usually in combination, using organizational processes, to effect a desired end. . ..Capabilities are based on developing, carrying, and exchanging information through the firm’s human capital” (Amit & Schoemaker, 1993, p. 35). The PM system may affect organizational capabilities by increasing communication between employees or encouraging employees to analyze how resources should be combined to solve problems. 
A PM system can encourage improvements across several capabilities such as in the areas of strategic management, operations, and stakeholder relationships. Hence, varied PM system usage (i.e., diagnostic and interactive uses) may have an effect on firm performance via its impact on organizational capabilities and meeting organizational targets. Effective strategic management involves necessarily a variety of critical orchestrating activities. Strategic management

encompasses activities such as the prioritization of strategic actions and objectives, obtaining information and feedback regarding the existing and forthcoming strategy, the alignment of organizational processes with the intended strategy, and the allocation of budgets and capital expenditures, among others (Porter, 1991; Simons, 1991). These activities rely on feedback obtained from diagnostic systems and direction from top managers who interactively obtain information, and discuss, challenge, and debate courses of action with other organizational members across levels. The information obtained from diagnostic and interactive PM system uses can help organizations reduce uncertainty and ease equivocality, enhancing strategic management capability. Without information derived from performance systems, organizations cannot assess their current performance vis-à-vis their intended strategy and may have a difficult time charting a new course of action or direction without knowing the facts and without the significant debate that can emerge from interactive use. The use of both systems is necessary as the organization has to balance the needs for control and renewal. We expect that PM system uses will have a positive effect on strategic management capability, but we also posit that the interaction between the two types of uses will explain additional variation, above and beyond what the individual uses may contribute.

H1a: Diagnostic Use will have a positive impact on Strategic Management Capability.
H1b: Interactive Use will have a positive impact on Strategic Management Capability.
H1c: The Dynamic Tension between Diagnostic Use and Interactive Use will have a positive impact on Strategic Management Capability.

Organizations also rely on their operational capabilities to garner advantage against competitors. Competitive advantage can emerge, for instance, from more productive, innovative, and integrated processes, and higher employee productivity. To enhance such processes, however, the organization relies on information furnished by both diagnostic and interactive systems and the tension created when both systems are used concurrently. For instance, information regarding productivity or quality has to be collected and shared in order for organizational constituents to take corrective measures when necessary or to promote processes that prove to be exemplary. Also, in order to integrate processes and orchestrate resources, organizations need to discuss the value proposition, identify costs and benefits, and set direction, all of which require information and deliberation across organizational members along with top management involvement. We expect that diagnostic and interactive PM system uses, along with the interaction between the two uses, will have a boosting effect on operational capabilities.

H2a: Diagnostic Use will have a positive impact on Operational Capability.
H2b: Interactive Use will have a positive impact on Operational Capability.
H2c: The Dynamic Tension between Diagnostic Use and Interactive Use will have a positive impact on Operational Capability.

Organizations rely on external stakeholders for business, affirmation, and resources (Freeman, 1984; Donaldson & Preston, 1995; Agle et al., 1999; Berman et al., 1999). Relationships with such stakeholders depend, however, on information. For instance, relationships with customers depend on timely and accurate information generated via performance monitoring systems.
Customers can demand information regarding quality defects or costs. Similarly, suppliers seek feedback information furnished by monitoring

systems concerning their performance. Challenges and opportunities regarding customers, suppliers, and regulators require debate and guidance by top management, which can interactively take an interest to generate solutions and resolve equivocality. However, when the two systems are used together, there is even greater potential to improve external stakeholder relations because the two systems can tackle uncertainty and equivocality only when used jointly. Thus, we hypothesize that the two systems and their interaction will have a positive effect on external stakeholder relations. H3a : Diagnostic Use will have a positive impact on External Stakeholder Relations Capability. H3b : Interactive Use will have a positive impact on External Stakeholder Relations Capability. H3c : The Dynamic Tension between Diagnostic Use and Interactive Use will have a positive impact on External Stakeholder Relations Capability. 2.5. Hypotheses: ORGANIZATIONAL CAPABILITIES→TARGET PERFORMANCE Organizations capable of effective strategic management can direct resources towards the attainment of organizational targets. First, managing feedback regarding existing strategy can help the organization align its operations with the respective strategy and thus properly equip the organization to meet its goals. Second, strategic management efforts can help the organization decide on its strategic objectives and allocate resources to achieve those objectives. In this form, strategic management is a protagonist but it also plays an enabling role. Effective strategic management should impact the attainment of organizational goals positively. H4 : Strategic Management Capability has a positive effect on Target Performance. Organizations depend on their processes to add value and derive healthy returns. These processes have to bolster the organizational operational capabilities which are responsible to propel the organization to meet or exceed its targets. Operational capabilities can be reflected by a variety of nuances such as enhanced productive processes, improved employee performance, integrated processes, and innovation in working processes (Simons, 1991). For example, improved employee performance, more integrated processes, and more innovation in working processes can help customers derive better value from the relationship with the firm and thus the organization can reap such benefits to meet its targets regarding customer relations. Similarly, delivery performance can be enhanced through operational capabilities and thus the organization can subsequently meet its operational performance targets. Consequently, we expect that higher levels of operational capabilities will enable the organization to meet its targets more effectively. H5 : Operational Capability has a positive effect on Target Performance. Organizations depend on external stakeholders for business opportunities, support, resources, acknowledgement, validation, and rulings (Donaldson & Preston, 1995; Agle et al., 1999; Berman et al., 1999). External stakeholders include customers, suppliers, regulators or other governmental institutions as well as the market at large. Customers for instance provide business opportunities while suppliers are assuming an ever growing importance in assuring the firm’s value chain. Organizations can improve profitability by increasing revenues and/or cutting costs. Improving relationships with customers can potentially increase revenues while working closer with suppliers can trim costs. 
Organizations cannot ignore external stakeholders because they can help the organization meet or exceed its targets (Berman et al., 1999; Margolis & Walsh, 2003).

H6 : External Stakeholder Relations Capability has a positive effect on Target Performance. 2.6. Hypotheses: MEETING TARGET PERFORMANCE→FIRM PERFORMANCE When organizations meet their customer relations targets, this generally manifests an ability to keep customers happy and improve market share. Ultimately, this translates into higher returns in terms of sales growth and profit growth, among other financial measures. Similarly, when organizations meet operational performance targets, they enjoy productivity improvements or shorter lead times which then lead to better returns on investment and higher profit to sales ratios. We expect that these effects will be robust across both subjective as well as objective financial ratios. We thus hypothesize that firms that meet their performance targets will demonstrate better performance. H7 : Target Performance will have a positive effect on firm performance. 2.7. Hypotheses: LONGITUDINAL EFFECTS PM systems are expensive to implement and maintain. The premise, however, is that the use of a PM system generates positive financial returns over time. PM systems furnish information and engender more effective resource orchestration which is critical for the organization’s current well-being and future prosperity. Organizations obtain and process information for operational, relational, and strategic use. We expect that firms that use a PM system would outperform firms that do not deploy a PM system. They should have better returns on assets and enjoy higher earnings. From a static perspective, the performance of the firm can be assessed by examining financial ratios before and after implementation. From a dynamic view, performance improves over time as organizations learn to adapt to a new system. The dynamic perspective acknowledges that time plays a vital role. H8 : PM system use will have a positive effect on firm financial performance. 3. Cross sectional study Our first study can be described as cross sectional in nature. We illustrate below the operationalization of constructs, research design and research methods, and provide results based on subjective and objective data obtained for 2010. Objective data for both the cross-sectional and longitudinal analyses were obtained from Aida - Bureau van Dijk (https://aida.bvdinfo.com/), a source of financial performance data and other information pertaining to firms operating in Italy and many other countries. 3.1. Operationalization of constructs In order to operationalize the variables, the researchers sought to identify existing scales that can be employed or scales that can be adapted to the specific context. Where existing scales were not identified, the researchers relied on the extant literature to specify the content domain of each construct. The review also included an investigation of contextual and organizational factors that may affect performance. Contextual factors may involve primary industry affiliation, firm size, and geographic ownership diversification while organizational factors may embrace measures of the resources the firm possesses. To assure that the questions could be correctly understood by respondents and easily answered by them, the initial survey questionnaire was carefully pre-tested. Thus, a preliminary draft of the

questionnaire was discussed with academic scholars to assess clarity, simplicity, and content validity. Afterwards, a pilot study was conducted with a group of accounting managers and controllers of six large organizations, whose inputs were used to improve the clarity, comprehensiveness, and relevance of the survey questions. Finally, the resulting survey was reviewed once more by a panel of three academics. The final survey items are discussed below and appear in Table 2. Diagnostic use was specified as a second-order construct and was operationalized via three first-order variables: monitoring, focusing attention, and legitimization. All three rely on measures developed by Henri (2006a,b), which he adapted from Vandenbosch (1999). Monitoring is reflected via four manifest variables. Focusing Attention is operationalized by seven observed variables. Legitimization includes nine observed items. Interactive use relies for content on the work of Widener (2007). Strategic Management Capability is a new scale and consists of six measures that address whether the PM system in the organization supports strategic management activities. These measures draw on the work of Simons (1991) on management control systems. Operational Capability is also a new scale and was captured via five indicators addressing productivity, employee performance, innovation of working processes, and development of integrated solutions. External Stakeholder Relations Capability is a multi-item scale and relies on the work of Cousins et al. (2008) and Mahama (2006). We operationalized Target Performance via a five-item scale which drew items from Ittner et al. (2003b). It addresses the extent to which the organization meets its performance targets across a variety of categories. Finally, Subjective Financial Performance is measured using four financial indicators relative to the industry average. 3.2. Research design The sample population consisted of 1000 organizations operating in Italy, randomly selected from Amadeus - Bureau Van Dijk (a database of public and private firms which includes Italian firms as well as multinational firms filing separately for their Italian operations). An appropriate sample of organizations covering different industry sectors and organizational characteristics was selected to maximize the generalizability of the findings. We contacted each firm's management directly to select a list of companies prepared to cooperate with the research. We then identified the key figures in each company who would be competent to fill in the survey. Names and e-mail addresses of targeted respondents were obtained during telephone contacts. The research design enabled the researchers to target respondents that possess substantive organizational-level knowledge and specific knowledge as it pertains to PM systems. We targeted high-level executives (i.e., CEOs, CFOs) as well as controllers and managers (e.g., managing directors, operations managers). Given the positional levels of these individuals, we are confident that the target population can serve as excellent respondents both in reference to explanatory variables as well as performance variables. The research was carried out through a survey questionnaire using fax and e-mail during 2010. To increase the survey response rate, we pre-notified specific respondents by phone to seek their input regarding their availability to work on the survey.
This approach probably led to more involvement and commitment by the respondents from the beginning of the project (Baldauf et al., 1999). The survey instrument was preceded by an introductory letter clarifying the purposes and objectives of the entire research project. Managers were also promised an overall PM system benchmark study to compare their responses to those of other participating organizations. Respondents were alerted to respond to the survey on three subsequent occasions.

Table 1. Demographics. For each category, the values listed are: frequency, percentage, and cumulative percent.

Respondent's job title
CEO: 19, 4.90, 4.90
CFO: 57, 14.80, 19.70
Controller: 185, 47.90, 67.60
Managing Director: 34, 8.80, 76.40
Operations Manager: 58, 15.00, 91.40
HR Manager: 17, 4.40, 95.80
Finance Manager: 6, 1.60, 97.40
Others: 10, 2.60, 100.00

Industry
Manufacturing: 246, 63.70, 63.70
Service: 140, 36.30, 100.00

Geographic Diversification
Italian: 202, 52.30, 52.30
Multinational: 184, 47.70, 100.00

Employee Size
n < 500: 221, 57.30, 57.30
500 ≤ n ≤ 999: 69, 17.90, 75.10
1000 ≤ n ≤ 1499: 18, 4.70, 79.80
1500 ≤ n ≤ 1999: 13, 3.40, 83.20
2000 ≤ n ≤ 2999: 25, 6.50, 89.60
3000 ≤ n ≤ 9999: 28, 7.30, 96.90
n ≥ 10000: 12, 3.10, 100.00

Organizational Age
Age ≤ 5: 12, 3.20, 3.20
6 ≤ Age ≤ 10: 22, 5.80, 9.00
11 ≤ Age ≤ 20: 51, 13.50, 22.50
21 ≤ Age ≤ 30: 59, 15.60, 38.10
31 ≤ Age ≤ 40: 61, 16.10, 54.20
41 ≤ Age ≤ 50: 53, 14.10, 68.30
Age ≥ 51: 120, 30.00
Not reported: 8, 2.10, 100.00

To address the research questions and hypotheses, we obtained useful responses from a cross-section of 386 firms operating in Italy, which represents close to a 39% effective response rate. We note that we obtained surveys from a total of 401 firms, but responses from 15 firms were discarded due to excessive missing data. There were very rare instances of missing values for the retained firms and those appeared to be missing at random; missing values were mean-replaced. Demographics appear in Table 1. Roughly 20% of the respondents are C-level executives while about 48% of respondents are controllers. Operations managers and managing directors accounted for another 23% of participants. As far as firm size is concerned, small, medium, and large organizations are represented and reflect general industrial demographics in Italy. Around 57.30% of the firms are considered small, with employment levels below 500, while 17% of the firms have more than 2,000 employees. About 64% of the sample represents manufacturing firms and 36% of the responding firms are service oriented. There is almost equal representation between pure Italian firms (52%) vis-à-vis multinationals (48%). The firms tend to be older, with relatively few firms (9%) being less than 10 years old. Overall, the demographics suggest a broad representation of firms operating in Italy. In order to identify whether the responding firms differ from the population of firms operating in Italy, we randomly selected another 581 firms operating in Italy which shared a similar proportion of manufacturing and service firms, as well as domestic and multinational firms, as our sample, and compared the two groups across many dimensions. Based on t-tests (α = .05), the analysis suggests no differences in terms of employee size, sales, profitability, and total assets. Thus, there is no evidence to suggest that the sample is biased against the population of firms operating in Italy.

Our research design included a variety of statistical and separation approaches (Craighead et al., 2011) to minimize the risk of common method variance (CMV). In terms of statistical remedies, Harman's single-factor test using CFA failed to support a single factor. We also used a marker variable technique (Malhotra et al., 2006). According to Craighead et al. (2011), the marker variable technique calls for the inclusion of an additional variable in the study which is theoretically unrelated to at least one other variable of interest (Harrison et al., 1996). Specifically, the marker variable (i.e., "My company has an operating top management philosophy of a growth strategy primarily through external financing [borrowing, capital issues, etc.]"; rated on a seven-point scale where 1 = to a very great extent and 7 = not at all) exhibited low correlations with all focal variables: Diagnostic Use (−.008), Interactive Use (.081), Strategic Management Capability (.010), Operational Capability (.033), External Stakeholder Relations Capability (.094), Target Performance (−.105), and Subjective Financial Performance (−.094). We also lessened the risk of CMV via methodological separation (Craighead et al., 2011). The independent and dependent perceptual variables were measured using different scales and different formats. Subjective financial performance was operationalized using a different scale than target performance and all other variables in the model. Importantly, the perceptual data set relies on a cross-sectional survey while objective financial performance was obtained from secondary sources for a ten-year period, including time periods both before and after the collection of perceptual data. Given the applied remedies, CMV is not expected to impact our specified relationships.

3.3. Research methods and results To assess the measurement model through confirmatory factor analysis, we used a covariance matrix as input and tested it via Mplus (Table 2). The measurement model includes first-order factors as well as a second-order factor which operationalizes Diagnostic Use. To ascertain the efficacy of the second-order model for Diagnostic Use, we examined four models within the context of the entire measurement model, relying on the guidance of Koufteros et al. (2009). Details about the methods of examining the second-order model specification appear in Appendix A. The overall CFA model, which includes the second-order specification for Diagnostic Use, has acceptable model fit as indicated by the fit statistics (χ² = 1985.576, df = 1090, χ²/df = 1.82, CFI = 0.93, NNFI = 0.92, RMSEA = 0.046, 90% CI of RMSEA = [.043, .049]). All item-factor loadings are greater than 0.50 and are statistically significant at the 0.01 level. We assessed discriminant validity by comparing the squared correlation between each pair of first-order constructs and their respective average variance extracted (AVE) (Table 2) (Koufteros, 1999). A squared correlation greater than the individual AVEs provides evidence to suggest the absence of discriminant validity. The highest squared correlation, which can be derived from Table 3, is 0.40 and it is significantly lower than the respective AVEs of the two first-order constructs involved, i.e., Strategic Management Capability (AVE = 0.57) and Operational Capability (AVE = 0.63). The composite reliabilities (Table 2) for the focal constructs are greater than the threshold of 0.70.
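As a worked illustration of the reliability and validity indices reported above, the short sketch below recomputes composite reliability and AVE from standardized loadings, using the four Monitoring loadings reported in Table 2 (.86, .72, .68, .67). The function names are ours and the formulas are the conventional ones for standardized indicators; this is illustrative arithmetic, not output reproduced from Mplus.

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each error variance is 1 - loading^2 for standardized indicators.
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

monitoring = [0.86, 0.72, 0.68, 0.67]  # standardized loadings from Table 2
print(round(composite_reliability(monitoring), 2))       # 0.82, as reported
print(round(average_variance_extracted(monitoring), 2))  # 0.54, as reported

# Discriminant validity check in the spirit described above: the largest squared
# inter-construct correlation (0.40) should fall below the AVEs of the two
# constructs involved (0.57 and 0.63).
print(0.40 < min(0.57, 0.63))  # True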
Overall, the constructs appear to be reliable and valid. We subsequently specified a structural model to partially examine the hypotheses advanced earlier. Before testing the structural model, however, we examined the distribution of each variable via measures of kurtosis and skewness, along with visual inspections. Each variable appeared to have an approximately normal distribution. Also, the scores for Diagnostic Use and Interactive Use were mean-centered before creating the interaction term which


Table 2
Measurement model and descriptive statistics. For each manifest item, the columns report the item mean, standard deviation, standardized coefficient (loading), and t-value.

Response format for the following three constructs: Using the scale where 1 = To a very great extent, 2 = To a great extent, 3 = To some extent, 4 = To a little extent, 5 = To a very little extent, and 6 = Not at all, please rate the extent to which your top management team currently uses performance measures to:

Monitoring, Composite Reliability (CR) = .82, AVE = .54
  Track progress towards goals                                   2.16   .923   .86   39.39
  Review key measures                                            2.73  1.007   .72   24.69
  Monitor results                                                1.83   .793   .68   21.48
  Compare outcomes to expectations                               1.96   .888   .67   20.57

Focusing Attention, CR = .89, AVE = .54
  Integrate the organisation, i.e., tie the organisation together                          3.03  1.002   .75   29.69
  Enable the organisation to focus on common issues                                        2.86   .98    .80   35.57
  Enable the organisation to focus on your critical success factors                        2.44   .971   .78   33.20
  Develop a common vocabulary in the organisation                                          2.98  1.088   .67   21.40
  Provide a common view of the organisation                                                2.76  1.030   .75   29.83
  Enable discussion in meetings of superiors, subordinates and peers                       2.92  1.018   .66   19.94
  Enable continual challenge and debate underlying results, assumptions and action plans   2.55  1.016   .74   28.08

Legitimization, CR = .90, AVE = .50
  Increase your understanding of the business                    2.67   .928   .71   24.65
  Justify decisions                                              2.72   .989   .57   15.25
  Verify assumptions                                             2.72   .901   .63   18.58
  Maintain your perspectives                                     2.81   .920   .75   29.63
  Support your actions                                           2.57   .866   .80   37.54
  Reinforce your beliefs                                         2.78   .936   .76   30.76
  Stay close to the business                                     2.51   .978   .69   23.56
  Increase your focus                                            2.72   .953   .69   23.28
  Validate your point of view                                    2.92   .970   .70   24.01

Response format for the following construct: In my organisation, senior managers have strong commitment to . . . Use the scale where 1 = To a very great extent, 2 = To a great extent, 3 = To some extent, 4 = To a little extent, 5 = To a very little extent, and 6 = Not at all.

Interactive Use, CR = .90, AVE = .64
  Reviewing organisational performance based on our performance measurement systems   2.81   .977   .74   27.91
  Using performance measures to analyse root problems                                 2.50   .915   .87   51.89
  Using performance measures to achieve targets                                       2.31   .871   .76   30.48
  Using performance measures to make decisions                                        2.40   .878   .82   38.87
  Using performance measures to get information to support decision making            2.40   .865   .82   38.97

Response format for the following three constructs: The performance measurement systems in my organisation . . . Use the scale where 1 = To a very great extent, 2 = To a great extent, 3 = To some extent, 4 = To a little extent, 5 = To a very little extent, and 6 = Not at all.

Strategic Management Capability, CR = .89, AVE = .57
  Support the achievement of key strategic objectives                                              2.36   .861   .75   30.01
  Improve the prioritisation of actions, projects and objectives                                   2.45   .911   .80   37.10
  Give feedback on the company strategy & its strategic direction                                  2.48   .889   .77   32.09
  Give feedback on operational processes                                                           2.61   .906   .72   26.49
  Improve the alignment of strategy and operations                                                 2.69   .906   .84   45.09
  Enhance negotiation of capital expenditure, budget allocation and financial support to projects  2.80   .940   .62   17.97

Operational Capability, CR = .89, AVE = .63
  Increase the innovation of working practices                   2.91   .880   .85   45.78
  Enhance the development of integrated solutions                2.89   .907   .79   34.91
  Promote operational improvements                               2.66   .838   .84   45.20
  Increase productivity                                          2.76   .816   .75   27.15
  Improve employee performance in their operations               2.97   .876   .73   27.84

External Stakeholder Relations Capability, CR = .78, AVE = .49
  Improve the overall company's leadership in the market                 2.95   .972   .80   27.92
  Improve our relationship with suppliers                                3.17  1.028   .72   20.99
  Improve our relationship with customers                                2.80  1.036   .71   20.87
  Improve our relationship with regulators or government institutions    3.32  1.190   .50   11.27

Response format for the following construct: To what extent does your organisation meet the targets set for the following categories of measures: Use the scale where 1 = To a very great extent, 2 = To a great extent, 3 = To some extent, 4 = To a little extent, 5 = To a very little extent, and 6 = Not at all.

Target Performance, CR = .80, AVE = .49
  Short term financial results, e.g., operating income, sales growth, etc.   2.25   .870   .50   10.94
  Customer relations, e.g., market share, customer satisfaction, etc.        2.59  1.016   .69   21.34
  Employee relations, e.g., employee satisfaction, safety, etc.              3.06  1.038   .75   25.38
  Operational performance, e.g., productivity, lead times, etc.              2.41   .911   .71   22.35
  Environmental performance, e.g., environmental compliance, etc.            2.94  1.276   .67   19.37

Response format for the following construct: In comparison with the industry average, how would you rate the performance of your company over the last two years in terms of the following indicators? Use the scale where 1 = Well Above Average, 2 = Above Average, 3 = Short of Above Average, 4 = Average, 5 = Short of Below Average, 6 = Below Average, 7 = Well Below Average.

Subjective Financial Performance, CR = .93, AVE = .77
  Rate of sales growth                                           3.11  1.257   .78   35.03
  Rate of profit growth                                          3.28  1.358   .91   75.31
  Return on investment (ROI)                                     3.35  1.289   .90   72.86
  Profit/sales ratio                                             3.30  1.266   .92   87.89

Diagnostic Use (second-order factor), CR = .88, AVE = .72 (columns: standardized coefficient, t-value)
  Monitoring                                                     .82   31.67
  Focus Attention                                                .87   33.22
  Legitimization                                                 .85   35.96

Fit indices: χ²(df) = 1985.576 (1090), χ²/df = 1.82, NNFI = .92, CFI = .93, RMSEA = .046, 90% CI RMSEA = (.043, .049).

operationalizes Dynamic Tension, in order to mitigate potential multicollinearity. The structural model fit was evaluated using Mplus via several criteria, and structural paths were examined for statistical significance based on t-tests and respective p-values. Correlations with objective financial performance measures appear negative due to the specific scaling (see Table 2) of all perceptual measures. Based on ROT and RBT, we control for the possession of a variety of both intangible and tangible resources. This was necessary in order to isolate the unique contributions of asset orchestration, in contrast to asset possession. The intangible assets we control for include Research & Development Expenditures and Industrial Patents & Intellectual Property Rights, while the tangible assets encompass Plant & Machinery and Industrial & Commercial Equipment. We also control for contextual variables such as firm size (number of employees in the current year), industry (manufacturing versus service), and whether the firm is a multinational. Table 4 presents standardized coefficients along with respective significance levels and t-values. The fit indices indicate that the structural model exhibits a good model-to-data fit (χ² = 2515.635, df = 1666, χ²/df = 1.509, CFI = 0.91, NNFI = 0.90, RMSEA = 0.043, 90% CI of RMSEA = [0.039, 0.046]). H1a and H1b predict, respectively, that diagnostic use and interactive use have a positive effect on strategic management capability. There is ample statistical evidence to support both hypotheses (p < .000 in each case), although the effect size for Diagnostic Use (β = 0.625) is considerably larger than that of Interactive Use (β = 0.246). Furthermore, H1c suggests that strategic management capability is subject to the interaction between diagnostic use and interactive use, termed Dynamic Tension. Dynamic Tension did produce a positive relationship (β = 0.084) with Strategic Management Capability, though the effect was modest and significant only at the .05 level. Similarly, H2a, H2b, and H2c posit respective positive effects on operational capability. All three standardized effects were positive, but the effect of diagnostic use (as opposed to interactive use and dynamic tension) is clearly more prominent from a substantive (β = 0.577 vs. β = 0.233 vs. β = 0.113) and statistical perspective (p < .000 vs. p < .002 vs. p < .010). As far as external stakeholder relations capability is concerned, diagnostic use (H3a) and interactive use (H3b) play a significant role. Diagnostic Use and Interactive Use have statistically significant positive effects (β = 0.535, p < .000 and β = 0.320, p < .000, respectively) on External Stakeholder Relations Capability, while the impact of Dynamic Tension (H3c) was negative, albeit minimal and non-significant (β = −0.032, p < .280). Collectively, the three explanatory variables contribute towards organizational capabilities. While we expected that to be the case, we were surprised that diagnostic use had such a strong positive showing given Henri's (2006a) conceptual and empirically based

designation of diagnostic use as a "negative force." We note, however, that our operationalization of diagnostic use and our choices of capabilities differ, and thus direct comparisons may not be fruitful. Henri operationalized diagnostic use only via measures of monitoring, which may be perceived as surveillance instruments that can be used merely to exercise managerial control. Instead, when monitoring is coupled with focusing attention and legitimization, employees can more clearly see the purpose of diagnostic use and verify that prior actions were credible, commanding higher levels of buy-in. We found that diagnostic use is a mechanism organizations can rely on to improve a wide range of capabilities. Organizations strive to meet their target performance using several means. The data analysis here illustrates that these capabilities can have a statistically significant and positive effect on the ability of the organization to meet its target performance. Interestingly, the effect of external stakeholder relations capability was more pronounced (β = 0.431, p < .000) than the effects of strategic management capability (β = 0.174, p < .024) and operational capability (β = 0.163, p < .027). This was surprising, as we expected that operational capability, being more proximal to performance, would probably be the most salient factor. However, we found corroborating theoretical and empirical evidence in the literature that can perhaps explain the prominence of stakeholder relations capability as an explanatory variable. For instance, Sarkis et al. (2010) underscore the importance of stakeholders, and according to Franco-Santos et al. (2012) organizations are under relentless pressure to deliver value not only to shareholders but increasingly to other stakeholders. One consequence is perhaps the inclusion of specific organizational performance targets which are salient to stakeholders; another is the development of top leadership compensation structures tied to stakeholder relations and corporate social performance. Stakeholder theory (Donaldson & Preston, 1995) is considered rather instrumental for performance. As Donaldson and Preston (1995) articulate, ". . .the principal focus of interest here has been the proposition that corporations practicing stakeholder management will, other things being equal, be relatively successful in conventional performance terms." (pp. 66–67). Berman et al. (1999) present a strategic stakeholder management model under which ". . .the nature and extent of managerial concern for a stakeholder group is viewed as determined solely by the perceived ability of such concern to improve firm financial performance" (p. 488). Berman et al. (1999) empirically demonstrated that, after controlling for strategy and the operational environment, stakeholder efforts are positively related to performance. A review of the literature that examined the relationship between corporate

Table 3
Correlation matrix.

Variables: 1. Monitoring; 2. Focus Attention; 3. Legitimization; 4. Diagnostic Use (1); 5. Interactive Use; 6. Dynamic Tension (2); 7. Strg. Mgmt Cap.; 8. Operational Cap.; 9. External Stakehol. Relations Cap.; 10. Target Perf.; 11. Subjective Financial Perf.; 12. ROTA (3); 13. ROA (4); 14. ROE (5); 15. EBITM (6); 16. Employee Size (7); 17. Industry; 18. Geographic Div.; 19. R&DE (8); 20. IP&IPR (9); 21. P&M (10); 22. ICE (11).

Correlations with variables 1–10 (rows = variables 1–22; columns = variables 1–10):
 1.   1
 2.   .633**   1
 3.   .604**   .630**   1
 4.   .796**   .883**   .897**   1
 5.   .525**   .477**   .513**   .575**   1
 6.   .286**   .318**   .387**   .391**   .408**   1
 7.   .599**   .554**   .583**   .660**   .644**   .404**   1
 8.   .448**   .554**   .511**   .590**   .616**   .375**   .635**   1
 9.   .374**   .468**   .410**   .487**   .522**   .219**   .473**   .578**   1
10.   .463**   .465**   .485**   .543**   .448**   .257**   .458**   .492**   .453**   1
11.   .216**   .225**   .191**   .239**   .241**   .101*    .273**   .246**   .247**   .360**
12.  −.113*   −.088    −.102*   −.113*   −.073    −.036    −.048    −.070    −.002    −.124*
13.  −.107*   −.080    −.094    −.105*   −.069    −.034    −.040    −.058     .002    −.116*
14.  −.081    −.066    −.101*   −.096    −.085    −.128*   −.035    −.058    −.074    −.178**
15.  −.135**  −.082    −.121*   −.125*   −.114*   −.048    −.066    −.083     .006    −.130*
16.  −.048    −.099    −.025    −.066    −.066    −.039    −.098    −.021    −.063    −.083
17.  −.068    −.045    −.056    −.062    −.074    −.120*   −.027    −.081    −.041    −.099
18.  −.220**  −.117*   −.185**  −.191**  −.156**  −.057    −.101*   −.117*   −.131**  −.262**
19.   .008    −.021     .024     .004    −.036    −.017    −.004    −.047    −.084    −.029
20.   .019    −.053    −.016    −.026    −.020    −.028    −.005    −.034    −.055     .002
21.   .093     .104     .224     .095     .747     .779    −.104    −.008    −.013    −.078
22.  −.107    −.139*   −.096    −.131*   −.115*   −.011    −.102    −.132*   −.106    −.103

Correlations with variables 11–22 (rows = variables 11–22; columns = variables 11–22):
11.   1
12.  −.094    1
13.  −.085    .994**   1
14.  −.168**  .619**   .618**   1
15.  −.109*   .962**   .957**   .588**   1
16.  −.111*  −.130*   −.128*   −.041    −.123*   1
17.  −.021    .233**   .229**   .217**   .213**  −.145**   1
18.  −.054    .228**   .222**   .144**   .227**   .155**   .181**   1
19.   .009    .009     .008     .020     .011     .185**   .067     .071     1
20.   .003    .045     .037     .035     .180**   .256**  −.109*   −.034     .020     1
21.  −.127*   .048     .048     .048     .430**   .255**  −.134*    .043    −.011     .153**   1
22.  −.051    .033     .015    −.031     .202**   .467**   .001     .134*    .297**   .566**   .377**   1

Notes:
1 Diagnostic Use is a second-order construct that includes Monitoring, Focus Attention, and Legitimization.
2 Dynamic Tension = Diagnostic Use × Interactive Use.
3 ROTA = Return on Total Assets = EBIT/Total Net Assets, where EBIT = Revenue − COGS − Operating Expenses − Depreciation & Amortization.
4 ROA = Return on Assets = Net Income/Total Assets.
5 ROE = Return on Equity = Net Income/Shareholders' Equity.
6 EBITM = Earnings before Interest and Taxes Margin = Last Four Quarters of Operating Earnings/Last Four Quarters of Sales.
7 Employee size was measured on a scale where 1: n < 500, 2: 500 < n < 999, 3: 999 < n < 1499, 4: 1500 < n < 1999, 5: 2000 < n < 2999, 6: 3000 < n < 9999, 7: n ≥ 10,000.
8 Research & Development Expenditures (R&DE, in Euros): investments in research & development (based on the balance sheet); counted as intangible assets.
9 Industrial Patents and Intellectual Property Rights (IP&IPR, in Euros): counted as intangible assets.
10 Plant and Machinery (P&M, in Euros): investments in tangible assets.
11 Industrial and Commercial Equipment (ICE, in Euros): investments in tangible assets.
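For readers who wish to replicate the objective performance measures, the ratio definitions in the notes above translate directly into code. The sketch below is a minimal illustration with placeholder input figures, not the study's data pipeline.

```python
# Objective financial performance measures as defined in the notes to Table 3.
# Input figures are illustrative placeholders.
def ebit(revenue, cogs, operating_expenses, depreciation_amortization):
    return revenue - cogs - operating_expenses - depreciation_amortization

def rota(ebit_value, total_net_assets):      # Return on Total Assets
    return ebit_value / total_net_assets

def roa(net_income, total_assets):           # Return on Assets
    return net_income / total_assets

def roe(net_income, shareholders_equity):    # Return on Equity
    return net_income / shareholders_equity

def ebitm(trailing_operating_earnings, trailing_sales):  # EBIT margin
    return trailing_operating_earnings / trailing_sales

e = ebit(revenue=100.0, cogs=55.0, operating_expenses=25.0, depreciation_amortization=5.0)
print(rota(e, 120.0), roa(10.0, 150.0), roe(10.0, 60.0), ebitm(15.0, 100.0))
```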


Table 4
Hypotheses testing (columns: effect, standardized coefficient, t-value, significance).

H1a  Diagnostic Use → Strategic Management Capability            .625    8.878   .000
H1b  Interactive Use → Strategic Management Capability           .246    3.224   .000
H1c  Dynamic Tension → Strategic Management Capability           .084    1.712   .043
H2a  Diagnostic Use → Operational Capability                     .577    7.308   .000
H2b  Interactive Use → Operational Capability                    .233    2.820   .002
H2c  Dynamic Tension → Operational Capability                    .113    2.324   .010
H3a  Diagnostic Use → External Stakeholder Capability            .535    6.200   .000
H3b  Interactive Use → External Stakeholder Capability           .320    3.578   .000
H3c  Dynamic Tension → External Stakeholder Capability          −.032    −.587   .280
H4   Strategic Management Capability → Target Performance        .174    1.974   .024
H5   Operational Capability → Target Performance                 .163    1.922   .027
H6   External Stakeholder Capability → Target Performance        .431    4.156   .000
H7   Target Performance → Subjective Financial Performance       .414    6.952   .000
H7   Target Performance → ROTA                                  −.199   −3.035   .001
H7   Target Performance → ROA                                   −.169   −2.584   .005
H7   Target Performance → ROE                                   −.225   −3.526   .000
H7   Target Performance → EBITM                                 −.115   −1.954   .025

Controls:
     Firm Size → Subj. Fin. Perf.                               −.090   −1.345   .090
     Industry → Subj. Fin. Perf.                                −.010    −.174   .431
     Geog. Diversification → Subj. Fin. Perf.                    .070    1.152   .125
     R&D Expenditures → Subj. Fin. Perf.                         .026     .418   .338
     Industrial Patents & Intell. Prop. Rights → Subj. Fin. Perf.  −.031  −.447  .328
     Plant & Machinery → Subj. Fin. Perf.                       −.102   −1.393   .082
     Industrial & Commercial Equipment → Subj. Fin. Perf.        .053     .686   .243
     Firm Size → ROTA                                            .088    1.272   .102
     Industry → ROTA                                             .026     .435   .332
     Geog. Diversification → ROTA                                .007     .110   .456
     R&D Expenditures → ROTA                                     .005     .084   .465
     Industrial Patents & Intell. Prop. Rights → ROTA           −.011    −.149   .441
     Plant & Machinery → ROTA                                    .054     .712   .438
     Industrial & Commercial Equipment → ROTA                   −.080   −1.020   .154
     Firm Size → ROA                                             .153    2.232   .013
     Industry → ROA                                              .001     .018   .493
     Geog. Diversification → ROA                                −.016    −.258   .398
     R&D Expenditures → ROA                                      .019     .291   .355
     Industrial Patents & Intell. Prop. Rights → ROA            −.038    −.531   .298
     Plant & Machinery → ROA                                     .073     .976   .165
     Industrial & Commercial Equipment → ROA                    −.127   −1.623   .052
     Firm Size → ROE                                             .058     .846   .198
     Industry → ROE                                              .093    1.581   .057
     Geog. Diversification → ROE                                −.096   −1.563   .059
     R&D Expenditures → ROE                                      .059     .927   .354
     Industrial Patents & Intell. Prop. Rights → ROE            −.009    −.128   .449
     Plant & Machinery → ROE                                     .105    1.422   .078
     Industrial & Commercial Equipment → ROE                    −.147   −1.904   .028
     Firm Size → EBITM                                           .070    1.141   .127
     Industry → EBITM                                           −.107   −2.015   .022
     Geog. Diversification → EBITM                              −.045    −.809   .209
     R&D Expenditures → EBITM                                    .025     .429   .334
     Industrial Patents & Intell. Prop. Rights → EBITM          −.097   −1.530   .063
     Plant & Machinery → EBITM                                   .492    7.943   .000
     Industrial & Commercial Equipment → EBITM                  −.015    −.222   .412

Fit indices: χ²(df) = 2515.635 (1666), χ²/df = 1.509, NNFI = .91, CFI = .90, RMSEA = .043, 90% CI RMSEA = (.039, .046).

social performance and corporate financial performance over 1972–2002 identified 109 empirical studies, of which 54 exhibited a positive association while only 7 suggested a negative association (Margolis & Walsh, 2003). In the context of PM systems, Cousins et al. (2008) and Mahama (2006) empirically found that PM systems enhance perceived interfirm financial and nonfinancial performance indirectly, by first improving cooperation and socialization among firms. Mahama's (2006) research suggests that PM systems help to ensure that performance information is distributed fairly among participants in the supply chain relationship, which promotes learning and problem solving. Furthermore, the sharing of information helps to align the interests among relationship constituents, motivating them to adapt to changes without the need for coercive measures. Cousins et al. (2008) demonstrate that the use of PM systems enhances communication, which improves socialization, an

essential element that can facilitate orchestration of resources across organizational boundaries. Given the perceived contributions that relations with external stakeholders can yield, companies have incentivised their top leadership to improve such relations (Deckop et al., 2006). Towards this end, Deckop et al. (2006) suggest that CEO pay structure can be used as an incentive to promote better stakeholder relations, and they found that a long-term CEO pay focus had an impact on corporate social performance. We conjecture that, given an incentive regime where top leadership is rewarded based on relations with stakeholders, the organization will be more likely to measure performance against respective targets and then direct resources toward the improvement of stakeholder relations, which subsequently fuels better performance. Considering that this is a relatively nascent topic, we expect that there is substantial variance in how organizations use their resources to improve


[Figure: estimated structural model. Diagnostic Use, Interactive Use, and Dynamic Tension (Diagnostic × Interactive) are linked to Strategic Management Capability, Operational Capability, and External Stakeholder Relations Capability; the three capabilities are linked to Target Performance, which in turn is linked to Subjective Financial Performance, ROTA, ROA, ROE, and EBITM.]

Using factorial ANOVA, the main and interaction effects were examined using p-values and partial η² (analogous to R², reflecting explanatory strength). Fig. 3a and b display three lines representing Low/Moderate/High Interactive Use, with Diagnostic Use represented on the x-axis; the y-axes represent Strategic Management Capability and Operational Capability, respectively. The graphs can be interpreted in two ways: (1) given a particular level of Diagnostic Use, do the Strategic Management Capability (or Operational Capability) means differ depending on the level of Interactive Use? The answers to these types of questions can be visualized in the graph by choosing a level of Diagnostic Use (e.g., low on the x-axis) and comparing the three Interactive Use means graphed at that level of Diagnostic Use. (2) The second way to interpret the results answers the question: given a particular level of Interactive Use, do the Strategic Management Capability or Operational Capability means differ depending on the level of Diagnostic Use? For example, given low Interactive Use, do the Strategic Management Capability means differ depending on whether Diagnostic Use is low vs. moderate? The answers to these types of questions can be visualized in the graph by choosing a level of Interactive Use (e.g., the low Interactive Use line) and comparing the three Diagnostic Use means graphed (e.g., Fig. 3a) at the corresponding three levels of Diagnostic Use (low vs. moderate vs. high on the Diagnostic Use x-axis).
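The sketch below illustrates the kind of factorial ANOVA just described, assuming a data file with hypothetical columns for the low/moderate/high Diagnostic Use and Interactive Use groups and a capability score. It shows how main effects, the interaction, and partial η² can be obtained; it is an illustration of the technique, not the analysis code used in the study.

```python
# 3 x 3 factorial ANOVA (Diagnostic Use group x Interactive Use group) with
# partial eta-squared for each effect. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("pm_groups.csv")  # columns: diagnostic_group, interactive_group, capability
model = smf.ols("capability ~ C(diagnostic_group) * C(interactive_group)", data=df).fit()
table = anova_lm(model, typ=2)

ss_error = table.loc["Residual", "sum_sq"]
effects = table.drop(index="Residual").copy()
effects["partial_eta_sq"] = effects["sum_sq"] / (effects["sum_sq"] + ss_error)
print(effects[["F", "PR(>F)", "partial_eta_sq"]])
```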

References Agle, B.R., Mitchell, R.K., Sonnenfeld, J.A., 1999. What matters to CEOs? An investigation of stakeholder attributes and salience, corporate performance and CEO values. Acad. Manage. J. 42 (5), 507–525. Ahn, H., 2001. Applying the balanced scorecard concept: an experience report. Long Range Plann. 34 (4), 441–461. Ahrens, T., Chapman, C.S., 2004. Accounting for flexibility and efficiency: a field study of management control systems in a restaurant chain. Contemp. Account. Res. 21 (2), 271–301. Amit, R., Schoemaker, P.J., 1993. Strategic assets and organizational rent. Strategic Manage. J. 14 (1), 33–46. Baldauf, A., Reisinger, H., Moncrief, W.C., 1999. Examining motivations to refuse in industrial mail surveys. J. Market Res. Soc. 41 (3), 345–353. Benner, M.J., Veloso, F.M., 2008. ISO 9000 practices and financial performance: a technology coherence perspective. J. Oper. Manage. 26 (5), 611–629. Berman, S.L., Wicks, A.C., Kotha, S., Jones, T.M., 1999. Does stakeholder orientation matter? The relationship between stakeholder management models and firm financial performance. Acad. Manage. J. 42 (5), 488–506. Birnberg, J.G., Turopolec, L., Young, S.M., 1983. The organizational context of accounting. Account. Org. Soc. 8 (2), 111–129. Bisbe, J., Batista-Foguet, J.M., Chenhall, R., 2007. Defining management accounting constructs: a methodological note on the risks of conceptual misspecification. Account. Org. Soc. 32 (7), 789–820. Bisbe, J., Malagueño, R., 2012. Using strategic performance measurement systems for strategy formulation: does it work in dynamic environments? Manage. Account. Res. 23 (4), 296–311. Bisbe, J., Otley, D., 2004. The effects of the interactive use of management control systems on product innovation. Account. Org. Soc. 29 (8), 709–737.


Bititci, U.S., Carrie, A.S., McDevitt, L., 1997. Integrated performance measurement systems: a development guide. Int. J. Oper. Prod. Manage. 17 (5), 522–534. Bititci, U.S., Turner, T., 2000. Dynamics of performance measurement systems. Int. J. Oper. Prod. Manage. 20 (6), 692–704. Bourne, M., Mills, J., Wilcox, M., Neely, A., Platts, K., 2000. Designing, implementing and updating performance measurement systems. Int. J. Oper. Prod. Manage. 20 (7), 754–771. Bourne, M., Pavlov, A., Franco, M., Lucianetti, L., Mura, M., 2013. Generating organisational performance: the contributing effects of performance measurement and human resource management practices. Int. J. Oper. Prod. Manage. 33 (11/12), 1599–1622. Braam, G.J., Nijssen, E.J., 2004. Performance effects of using the balanced scorecard: a note on the Dutch experience. Long Range Plann. 37 (4), 335–349. Bullinger, H.J., Kuhner, M., Van Hoof, A., 2002. Analyzing supply chain peformance using a balanced measurement method. Int. J. Prod. Res. 40 (15), 3533–3543. Burney, L., Widener, S.K., 2013. Behavioral work outcomes of a strategic performance measurement system-based incentive plan. Behav. Res. Account. 25 (2), 115–143. Chenhall, R.H., 2005. Integrative strategic performance measurement systems, strategic alignment of manufacturing, learning and strategic outcomes: an exploratory study. Account. Org. Soc. 30 (5), 395–422. Chenhall, R.H., Langfield-Smith, K., 1998. The relationship between strategic priorities, management techniques and management accounting: an empirical investigation using a systems approach. Account. Org. Soc. 23 (3), 243–264. Chenhall, R.H., Morris, D., 1995. Organic decision and communication processes and management accounting systems in entrepreneurial and conservative business organizations. Omega 23 (5), 485–497. Chia, A., Goh, M., Hum, S.H., 2009. Performance measurement in supply chain entities: balanced scorecard perspective. Benchmark Int. J. 16 (5), 605–620. Chirico, F., Sirmon, D.G., Sciascia, S., Mazzola, P., 2011. Resource orchestration in family firms: investigating how entrepreneurial orientation, generational involvement, and participative strategy affect performance. Strat. Entr. J. 5 (4), 307–326. Choi, J.W., Hecht, G.W., Tayler, W.B., 2013. Strategy selection, surrogation, and strategic performance measurement systems. J. Account. Res. 51 (1), 105–133. Corbett, C.J., Montes-Sancho, M.J., Kirsch, D.A., 2005. The financial impact of ISO 9000 certification in the United States: an empirical analysis. Manage. Sci. 51 (7), 1046–1059. Cousins, P.D., Lawson, B., Squire, B., 2008. Performance measurement in strategic buyer-supplier relationships: the mediating role of socialization mechanisms. Int. J. Oper. Prod. Manage. 28 (3), 238–258. Crabtree, A.D., DeBusk, G.K., 2008. The effects of adopting the balanced scorecard on shareholder returns. Adv. Account. 24 (1), 8–15. Craighead, C.W., Ketchen, D., Dunn, K.S., Hult, G., 2011. Addressing common method variance: guidelines for survey research on information technology, operations, and supply chain management. Eng. Manage. IEEE Trans. 58 (3), 578–588. Crook, T.R., Ketchen, D.J., Combs, J.G., Todd, S.Y., 2008. Strategic resources and performance: a meta- analysis. Strategic Manage. J. 29 (11), 1141–1154. Daft, R.L., Lengel, R.H., 1986. Organizational information requirements, media richness and structural design. Manage. Sci. 32 (5), 554–571. Davis, S., Albright, T., 2004. 
An investigation of the effect of balanced scorecard implementation on financial performance. Manage. Account. Res. 15 (2), 135–153. Deckop, J.R., Merriman, K.K., Gupta, S., 2006. The effects of CEO pay structure on corporate social performance. J. Manage. 32 (3), 329–342. Deal, T.E., Kennedy, A.A., 1982. Corporate Cultures. Reading MA:. Addison-Wesley. de Leeuw, S., van den Berg, J.P., 2011. Improving operational performance by influencing shopfloor behavior via performance management practices. J. Oper. Manage. 29 (3), 224–235. Donaldson, T., Preston, L.E., 1995. The stakeholder theory of the corporation: concept, evidence, and implications. Acad. Manage. Rev. 20 (1), 65–91. Estampe, D., Lamouri, S., Paris, J.L., Brahim-Djelloul, S., 2013. A framework for analysing supply chain performance evaluation models. Int. J. Prod. Econ. 142 (2), 247–258. Fitzgerald, L., Brignall, S., Silvestro, R., Voss, C., Robert, J., 1991. Performance Measurement in Service Businesses: Chartered Institute of Management Accountants London. Franco-Santos, M., Lucianetti, L., Bourne, M., 2012. Contemporary performance measurement systems: a review of their consequences and a framework for research. Manage. Account. Res. 23 (2), 79–119. Freeman, R.E., 1984. Strategic Management: A Stakeholder Approach. Pitman, Boston. Galbraith, J.R., 1974. Organization design: an information processing view. Interfaces 4 (3), 28–36. Garengo, P., Sharma, M.K., 2014. Performance measurement system contingency factors: a cross analysis of Italian and Indian SMEs. Prod. Plan. Control 25 (3), 220–240. Grafton, J., Lillis, A.M., Widener, S.K., 2010. The role of performance measurement and evaluation in building organizational capabilities and performance. Account. Org. Soc. 35 (7), 689–706. Grosswiele, L., Röglinger, M., Friedl, B., 2012. A decision framework for the consolidation of performance measurement systems. Decis. Support Syst. 54 (2), 1016–1029. Gunasekaran, A., Patel, C., Tirtiroglu, E., 2001. Performance measures and metrics in a supply chain environment. Int. J. Oper. Prod. Manage. 21 (1/2), 71–87.


Gunasekaran, A., Patel, C., McGaughey, R.E., 2004. A framework for supply chain performance measurement. Int. J. Prod. Econ. 87 (3), 333–347. Gunasekaran, A., Kobu, B., 2007. Performance measures and metrics in logistics and supply chain management: a review of recent literature (1995–2004) for research and applications. Int. J. Prod. Res. 45 (12), 2819–2840. Guo, S., Fraser, M.W., 2010. Propensity score analysis. Stat. Methods Appl.. Hamel, G., Prahalad, C.K., 1994. Competing for the Future: Breakthrough Strategies for Seizing Control of Your Industry and Creating the Markets of Tomorrow. Harvard Business School Press. Hamilton, S., Chervany, N.L., 1981. Evaluating information system effectiveness-Part I: Comparing evaluation approaches. MIS Quart. 5 (3), 55–69. Hansen, M.H., Perry, L.T., Reese, C.S., 2004. A Bayesian operationalization of the resource-based view. Strategic Manage. J. 25 (13), 1279–1295. Harrison, D.A., McLaughlin, M.E., Coalter, T.M., 1996. Context, cognition, and common method variance: psychometric and verbal protocol evidence. Organ. Behav. Hum. Dec. 68 (3), 246–261. Heckman, J.J., Ichimura, H., Todd, P.E., 1997. Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme. Rev. Econ. Stud. 64 (4), 605–654. Heim, G.R., Ketzenberg, M.E., 2011. Learning and relearning effects with innovative service designs: an empirical analysis of top golf courses. J. Oper. Manage. 29 (5), 449–461. Helfat, C.E., Finkelstein, S., Mitchell, W., Peteraf, M., Singh, H., Teece, D., Winter, S.G., 2007. Dynamic Capabilities: Understanding Strategic Change in Organizations. Malden. MA: Blackwell. Henri, J.F., 2006a. Management control systems and strategy: A resource-based perspective. Account. Org. Soc. 31 (6), 529–558. Henri, J.-F., 2006b. Organizational culture and performance measurement systems. Account. Org. Soc. 31 (1), 77–103. Hitt, M.A., Ireland, D.R., Sirmon, D.G., Trahms, C.A., 2011. Strategic entrepreneurship: creating value for individuals, organizations, and society. Acad. Manage. Perspect. 25 (1), 57–75. Hofstede, G., 1980. Culture’s Consequences. Beverly Hills. Sage Publications. Holcomb, T.R., Holmes, R.M., Connelly, B.L., 2009. Making the most of what you have: managerial ability as a source of resource value creation. Strat. Manage. J. 30 (5), 457–485. Hoque, Z., James, W., 2000. Linking balanced scorecard measures to size and market factors: Impact on organizational performance. J. Manage. Account. Res. 12 (1), 1–17. Hult, G.T.M., Ketchen, D.J., Adams, G.L., Mena, J.A., 2008. Supply chain orientation and balanced scorecard performance. J. Manage. Issues 20 (4), 526–544. Ittner, C.D., Larcker, D.F., 1998. Are nonfinancial measures leading indicators of financial performance? An analysis of customer satisfaction. J. Account. Res. 36 (1), 1–35. Ittner, C.D., Larcker, D.F., Meyer, M.W., 2003a. Subjectivity and the weighting of performance measures: evidence from a balanced scorecard. Account. Rev. 78 (3), 725–758. Ittner, C.D., Larcker, D.F., Randall, T., 2003b. Performance implications of strategic performance measurement in financial services firms. Account. Org. Soc. 28 (7), 715–741. Kaplan, R.S., Norton, D.P., 1992. The balanced scorecard – measures that drive performance. Harvard Bus. Rev. 70 (1), 71–79. Kaplan, R.S., Norton, D.P., 2001. Transforming the balanced scorecard from performance measurement to strategic management: Part I. Accounting Horizons 15 (1), 87–104. Kaplan, R.S., Norton, D.P., 2008. Mastering the management system. 
Harvard Bus. Rev. 86 (1), 1–16. Keegan, D.P., Eiler, R.G., Jones, C.R., 1989. Are your performance measures obsolete? Manage. Account. 70 (12), 45–50. Ketchen, D.J., Wowak, K., Craighead, C.W., 2014. Resource gaps and resource orchestration shortfalls in supply chain management: The case of product recalls. J. Sup. Chain Manage. 50 (3), 6–15. Koufteros, X., 1999. Testing a model of pull production: a paradigm for manufacturing research using structural equation modeling. J. Oper. Manage. 17 (4), 467–488. Koufteros, X., Babbar, S., Kaighobadi, M., 2009. A paradigm for examining secondorder factor models employing structural equation modeling. Int. J. Prod. Econ. 120 (2), 633–652. Lehtinen, J., Ahola, T., 2010. Is performance measurement suitable for an extended enterprise? Int. J. Oper. Prod. Manage. 30 (2), 181–204. Lynch, R.L., Cross, K.F., 1992. Measure up!: The Essential Guide to Measuring Business Performance: Mandarin. Mahama, H., 2006. Management control systems, cooperation and performance in strategic supply relationships: a survey in the mines. Manage. Account. Res. 17 (3), 315–339.

Malhotra, N.K., Kim, S.S., Patil, A., 2006. Common method variance in IS research: a comparison of alternative approaches and a reanalysis of past research. Manage. Sci. 52 (12), 1865–1883. Margolis, J.D., Walsh, J.P., 2003. Misery loves companies: rethinking social initiatives by business. Admin. Sci. Quart. 48, 268–305. Melnyk, S.A., Stewart, D.M., Swink, M., 2004. Metrics and performance measurement in operations management: dealing with the metrics maze. J. Oper. Manage. 22 (3), 209–218. Mundy, J., 2010. Creating dynamic tensions through a balanced use of management control systems. Account. Org. Soc. 35 (5), 499–523. Neely, A.D., 1999. The performance measurement revolution: why now and what next? Int. J. Oper. Prod. Manage. 19 (2), 205–228. Neely, A.D., 2005. The evolution of performance measurement research. Int. J. Oper. Prod. Manage. 25 (12), 1264–1277. Neely, A.D., Gregory, M., Platts, K., 1995. Performance measurement system design: a literature review and research agenda. Int. J. Oper. Prod. Manage. 15 (4), 80–116. Neely, A.D., Mills, J., Platts, K., Richards, H., Gregory, M., Bourne, M., Kennerley, M., 2000. Performance measurement system design: developing and testing a process-based approach. Int. J. Oper. Prod. Manage. 20 (10), 1119–1145. Neely, A.D., Adams, C., Kennerley, M., 2002. The Performance Prism: The Scorecard for Measuring and Managing Business Success. Prentice Hall Financial Times London. Pavlov, A., Bourne, M., 2011. Explaining the effects of performance measurement on performance: n organizational routines perspective. Int. J. Oper. Prod. Manage. 31 (1), 101–122. Peikes, D.N., Moreno, L., Orzol, S.M., 2008. Propensity score matching. Am. Stat. 62 (3), 222–231. Porter, M.E., 1991. Towards a dynamic theory of strategy. Strat. Manage. J. 12 (2), 95–117. Rhody, J.D., Tang, T.L.P., 1995. Learning from Japanese transplants and American corporations. Public Pers. Manage. 24 (1), 19–32. Roberts, J., 1990. Strategy and accounting in a UK conglomerate. Account. Org. Soc. 15 (1), 107–126. Rothaermel, F.T., Hess, A.M., 2007. Building dynamic capabilities: innovation driven by individual-, firm-, and network-level effects. Organ. Sci. 18 (6), 898–921. Sarkis, J., Gonzalez-Torre, P., Adenso-Diaz, B., 2010. Stakeholder pressure and the adoption of environmental practices: the mediating effect of training. J. Oper. Manage. 28 (2), 163–176. Simons, R., 1991. Strategic orientation and top management attention to control systems. Strat. Manage. J. 12 (1), 49–62. Simons, R., 1994. How new top managers use control systems as levers of strategic renewal. Strat. Manage. J. 15 (3), 169–189. Simons, R., 1995. Control in an age of empowerment. Harvard Bus. Rev. 73 (2), 80–88. Simons, R., Dávila, A., Kaplan, R.S., 2000. Performance Measurement & Control Systems for Implementing Strategy: Text & Cases. Prentice Hall Upper Saddle River, NJ. Sirmon, D.G., Hitt, M.A., Ireland, D.R., Gilbert, B.A., 2011. Resource orchestration to create competitive advantage: breadth, depth, and life cycle effects. J. Manage. 37 (5), 1390–1412. Sirmon, D.G., Hitt, M.A., 2003. Managing resources: Linking unique resources, management and wealth creation in family firms. Entrepreneurship Theory and Practice 27 (4), 339–358. Sirmon, D.G., Hitt, M.A., Ireland, D.R., 2007. Managing firm resources in dynamic environments to create value: looking inside the black box. Acad. Manage. Rev. 32 (1), 273–292. Sprinkle, G.B., 2003. Perspectives on experimental research in managerial accounting. Account. Org. Soc. 
28 (2), 287–318. Van der Stede, W.A., Chow, C.W., Lin, T.W., 2006. Strategy, choice of performance measures, and performance. Behav. Res. Account. 18 (1), 185–205. Taylor, A., Taylor, M., 2013. Antecedents of effective performance measurement system implementation: an empirical study of UK manufacturing firms. Int. J. Prod. Res. 51 (18), 1–14. Tyre, M.J., Orlikowski, W.J., 1994. Windows of opportunity: temporal patterns of technological adaptation in organizations. Organ. Sci. 5 (1), 98–118. Vandenbosch, B., 1999. An empirical analysis of the association between the use of executive support systems and perceived organizational competitiveness. Account. Org. Soc. 24 (1), 77–92. Voelpel, S.C., Leibold, M., Eckhoff, R.A., 2006. The tyranny of the balanced scorecard in the innovation economy. J. Intellect. Cap. 7 (1), 43–60. Wowak, K.D., Craighead, C.W., Ketchen, D.J., Hult, G.T.M., 2013. Supply chain knowledge and performance: a meta-analysis. Decision Sci. 44 (5), 843–875. Widener, S.K., 2007. An empirical analysis of the levers of control framework. Account. Org. Soc. 32 (7), 757–788. Zimmerman, J.L., 2000. Accounting for Decision Making and Control. Irwin/McGraw-Hill, Boston.