Available online at www.sciencedirect.com

ScienceDirect



www.elsevier.com/locate/procedia

Procedia Computer Science 121 (2017) 451–460

CENTERIS - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies, CENTERIS / ProjMAN / HCist 2017, 8-10 November 2017, Barcelona, Spain

A Method for Measuring the Success of Collaborative University-Industry R&D Funded Contracts

Gabriela Fernandes*, Eduardo B. Pinto, Madalena Araújo, Pedro Magalhães, Ricardo J. Machado

School of Engineering, University of Minho, Campus de Azurém, 4804-533 Guimarães, Portugal

Abstract

This paper describes a method specially devoted to quantitatively measuring the success of collaborative university-industry R&D funded contracts, which can be managed as a single project or as a program of projects. The method aims to measure success throughout the program/project lifecycle, combining both retrospective (lagging) and prospective (leading) performance indicators. The method uses tangible/specific outcomes as performance indicators, such as patents or publications, as well as intangible/subjective performance indicators such as social relationships, organizational arrangements or motivations. The proposed method was conceived by conducting a thorough review of the published literature in this area, and by analyzing, as case studies, two consecutive collaborative R&D funded programs between the University of Minho and Bosch Car Multimedia, amounting to an overall investment of over 70 million Euros, from 2012 to 2018.

© 2017 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of the scientific committee of the CENTERIS - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies.

Keywords: Project success; program management; R&D university-industry collaborations

* Corresponding author. E-mail address: [email protected]

1877-0509 © 2017 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the scientific committee of the CENTERIS - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies. 10.1016/j.procs.2017.11.061



1. Introduction

Firms seek innovation through collaborations with universities [1-4]. Collaborative university-industry R&D projects are among the main public policy actions to promote innovation [5]. Research and development (R&D) is increasingly viewed as an important input to innovation [6]. The Frascati Manual [6, p. 28] defines R&D as “creative and systematic work undertaken in order to increase the stock of knowledge – including knowledge of humankind, culture and society – and to devise new applications of available knowledge.” To provide guidance on what is and what is not an R&D activity, the Frascati Manual identifies five criteria: the activity should be novel, creative, uncertain in its outcome, systematic, and transferable and/or reproducible.

University-industry collaborations are recognized as having a positive impact both on the partners and on the countries where those collaborations occur [5, 7, 8]. They are considered an engine towards knowledge-based societies and economies, which are threatened by increased global competition, ongoing economic issues and high levels of unemployment. University-industry collaborations are therefore on the agenda of political decision makers. Different states, as well as the European Union, support initiatives such as collaborative R&D projects through public funds, thereby stimulating innovation [3, 9]. Industry embraces such collaborations for a number of reasons, namely to leverage its R&D investments through public funding, to obtain higher performance in its innovation initiatives, to share risks and uncertainties [10, 11] at lower cost [7, 12, 13], and to increase its resources’ skills and competences to outperform the competition in the global market [8, 14].
Universities seek, for example, to acquire funds to hire human resources and purchase cutting-edge equipment [3, 13], to develop new case study teaching material [7], and to increase their capacity to attract new students [13]. Achieving these benefits is a demanding task due to the existing “cultural gap” between the partners, requiring well organized and managed R&D collaborations [10], which can be run as a program or as a project. A program is a set of projects that are somehow related and contribute to the same goal [15]. Differences are found in the way programs and projects are managed, namely in their response to uncertainty and change [16]. With the rise of collaborative R&D programs/projects between industry and university [17], the need for monitoring and assessing the outcomes of such initiatives has increased as well [5]. How to measure the success of such university-industry R&D collaborations is an important challenge, and few attempts have been made [18]. A two-fold systematic measurement is necessary: to assess programs/projects a posteriori, to find out which benefits have been and are being provided by the collaboration, matching them against the initial benefit expectations [5, 17]; and, no less important, to monitor the on-going process to enable adjustment and improvement [8, 17]. Based mainly on the performance measurement system suggested by Perkmann, Neely and Walsh [17] for measuring the success of university-industry alliances, a quantitative method was developed to assess the success of collaborative university-industry R&D funded programs/projects, combining both retrospective (lagging) and prospective (leading) indicators.
The method uses tangible/specific outcomes as indicators, such as patents, publications or licenses [1, 5, 17], as well as intangible/subjective indicators such as social relationships, organizational arrangements or motivations [1, 17], while acknowledging the complexity of reaching a general agreement on the criteria to be used to evaluate these subjective indicators. The method also includes variables that allow judging the success of the program/project not only a posteriori, i.e. after the program/project closure, but also a priori, during the program/project lifecycle [5].

The main objective of this paper is to present the abovementioned method to quantitatively measure the success of collaborative university-industry R&D funded programs/projects, including the steps of its application. The case studies used in this research were the HMIExcel and IC-HMI collaborative R&D funded programs between the University of Minho (UMinho) and Bosch Car Multimedia in Portugal (Bosch). The case studies were selected for convenience. In fact, this research work resulted from the UMinho and Bosch partnership’s need for a quantitative method to measure and compare the success of their collaborative R&D programs/projects. HMIExcel, closed in 2015 (IC-HMI is still in progress), was considered a successful university-industry collaborative R&D funded program, as reported in Pinto et al. [19]. However, its success was mainly subjectively perceived




by the different program stakeholders. A well-established method to quantitatively measure its success was missing, preferably one that could also be used to compare the success of different initiatives. Such a method could likewise be adopted by funding entities to assess the success of the programs/projects that they support financially.

This paper is organized as follows. The second section summarizes the existing literature on measurement frameworks in a university-industry alliance context. The third describes the background of the two case studies used in this research. The fourth and fifth sections discuss the proposed quantitative method for measuring the success of collaborative R&D funded programs/projects, based mainly on the performance measurement system developed by Perkmann et al. [17]. The adaptations made to the stages, process components and performance indicators are explained in section four, and section five focuses on the steps for applying the method. Finally, the main reflective conclusions that emerged from this study, as well as suggestions for future work, are discussed.

2. Relation to existing theories and work

A key performance indicator points out what should be done to increase performance [20]. Grimaldi and Von Tunzelmann [5] assessed the UK government’s LINK program, aimed at promoting pre-competitive collaboration between firms and universities, by elaborating performance indicators that take into account both the direct and the indirect outcomes of the partnership. In particular, they proposed measuring direct outcomes, such as patents and publications, and indirect outcomes, represented by the commercial exploitation of patents and scientific results and by follow-up opportunities generated by alliances. Grimaldi and Von Tunzelmann [5] also assessed the degree of matching between the initial objectives and their final achievements.
They concluded that it is necessary to develop a more reliable and exhaustive measurement framework that could help governments monitor project success, not only a posteriori, i.e. after the program closes, but also a priori, throughout the project’s execution. Based on case studies and published research, Barnes, Pashby and Gibbons [7] identified the management ‘success’ factors for both parties and combined those factors into a single framework tool, tested through a further case study. The framework fosters an awareness of the key issues affecting the success of university-industry partnerships and stimulates the manager to take appropriate and timely action to prevent problems before they arise. Barnes et al. [7] argue that strong collaborations are based on strong personal and organizational relationships.

Perkmann et al. [17] proposed a ‘success map’ to measure the performance of university-industry alliances, based on the existing literature on university-industry relationships. The ‘success map’ distinguishes between four different process stages: ‘inputs’, ‘in-process activities’, ‘outputs’ and ‘impacts’, and proposes measuring process components for each of the four stages, providing guidance for designing both prospective and retrospective indicators for the performance management system, as well as the ingredients by which positive outcomes are achieved within university-business alliances. For example, in the ‘inputs’ stage, as the first process component or factor to achieve success, Perkmann et al. [17] suggest that collaborations need to mobilize sufficient and capable resources. Moreover, the involvement of highly qualified and motivated academic researchers is important to ensure success, since the ‘cultural gap’ between the parties generally reduces academic researchers’ enthusiasm to work with industry.
Those factors, the availability of resources and the presence of highly qualified and motivated academic researchers, are requirements for the execution of the ‘in-process activities’ stage. The activities of the ‘in-process’ stage, in turn, generate outputs. For example, at the ‘outputs’ stage, Perkmann et al. point out the new scientific knowledge generated (publications), the development of new technologies (patents), and skilled staff as a result of training and learning opportunities for both parties. Finally, at the ‘impacts’ stage: the availability of new technologies can be a source of new products or process improvement innovations; the new scientific knowledge can generate new ideas that can be developed into new R&D projects; and the availability of more skilled and trained staff within the industrial partner should lead to an accumulation of human capital that increases its in-house expertise. Perkmann et al. [17] also indicate several performance indicators, i.e. metrics that measure each process component of each process stage of the ‘success map’. Seppo and Lilles [4] also propose a set of performance indicators, and their usage, to assess the ‘input’, ‘output’ and ‘impact’ stages of university-industry alliances.


3. Case study

The selected case studies, named HMIExcel and IC-HMI, resulted from a strategic partnership established between the University of Minho (UMinho) and Bosch Car Multimedia (Bosch) in Portugal for the development and production of advanced car multimedia solutions. UMinho is currently among the most prestigious institutions of higher education in Portugal, and is in the top 100 universities under 50 years old (75th position) worldwide. Founded in 1973, UMinho is engaged in the valorization of the knowledge-research chain, development and innovation. UMinho stands out for its volume of publications and number of patent requests, as well as for its high level of collaboration with industry, with around 250 R&D contracts signed annually with industry players. Bosch has been located in Braga, Portugal, since its foundation in 1990. Over the years, Bosch became one of the automotive industry’s biggest suppliers and the leading plant of the Car Multimedia division of the Bosch Group. Presently, Bosch produces a wide portfolio of products, such as navigation systems, instrumentation systems, car radios, steering angle sensors, and electronic controllers. In 2015, Bosch achieved a turnover of around 516 million Euros, 99% export driven, and employed around two thousand collaborators.

HMIExcel, managed as a program, amounted to an investment of about €19 million, ran from May 2013 to June 2015, was partly funded through the Portuguese incentives for R&D, and involved around 300 researchers and collaborators from UMinho and Bosch. The IC-HMI program foresees an investment of €54.7 million, from July 2015 to June 2018, with the admission of 94 new staff dedicated to R&D at Bosch and 173 new researchers at UMinho.
In conducting the case studies, three research methods were applied: document analysis, participative observation, and several unstructured focus groups, in which the group discussion flows more freely, in order to discuss a suitable method to measure the success of collaborative university-industry R&D funded programs/projects. The focus groups were conducted between January and April 2017 with different HMIExcel and IC-HMI stakeholders, namely the program manager, two project managers, and four Project Management Office (PMO) officers. There is a high proportion of PMO officer participation for two reasons: (1) their role, since they are the main elements responsible for improving and supporting program and project management, including how to measure its success; and (2) their greater availability, as they are more aware of the importance of measuring the success of the programs/projects. The focus group moderator (researcher) used, as auxiliary material, the performance measurement system proposed by Perkmann et al. [17] for measuring university-industry alliances, to stimulate the opinions of the focus group participants.

4. Performance indicators adopted by the method

The proposed method for measuring the success of collaborative R&D funded contracts was achieved by conducting a thorough review of current research in this area through the published literature. Perkmann et al. [17] was selected as the main theoretical foundation for the development of the method, taking into account the similarity of objectives and its robustness. The work of Seppo and Lilles [4] also contributed to the development of the measurement method by pointing out performance indicators.
As mentioned above, this research work resulted from the UMinho and Bosch partnership’s need for a quantitative method to measure and compare the success of their collaborative R&D funded programs; therefore, it was decided to link the measurement method to the Program and Project Management lifecycle adopted by UMinho and Bosch to manage their collaborative R&D programs, based on the work of Fernandes et al. [21, 22]. The Program and Project Management lifecycle from Fernandes et al. [21, 22] is divided into four phases: ‘program preparation’; ‘program initiation’; ‘program benefits delivery’; and ‘program closure’. It was therefore linked with the four stages suggested by Perkmann et al. [17] for building the ‘success map’, as shown in Figure 1. Most of the methods referred to in the literature relate to the same ‘inputs’, ‘outputs’ and ‘impacts’ adopted by Perkmann et al. [17]. Table 1 summarizes the performance indicators adopted by the method for measuring the success of collaborative R&D funded contracts, grouped by program management phase and process component.




[Figure 1: the Perkmann et al. stages (‘Inputs’, ‘In-process activities’, ‘Outputs’, ‘Impacts’) are mapped onto the Fernandes et al. phases (‘Program preparation’, ‘Program initiation’, ‘Program benefits delivery’, ‘Program closure’, ‘Post-program’).]
Fig. 1. Linkage between Perkmann et al. and Fernandes et al. phases.

Table 1. Performance indicators adopted by the proposed method for measuring the success of collaborative university-industry R&D funded programs/projects, grouped by program management phase (weight*) and process component (weight*); each performance indicator carries its own weight* and, where applicable, the reference(s) that previously suggested it.

Program preparation (25%)
- Researchers’ capability (15%): scientific impact, i.e. researchers’ h-index (40%) [4, 17, 23]; % of researchers with past experience in UI collaborative R&D projects (30%) [4]; % of senior researchers, not research assistants (30%)
- Researchers’ motivation (15%): % of research income from industry (100%) [4, 17, 24]
- Industry collaborators’ capability (15%): % of industry team collaborators with a postgraduate degree or a higher level of qualification (postgraduate, master or PhD) (50%) [4]; % of collaborators with past experience of UI collaborative R&D programs/projects (50%) [7]
- Industry collaborators’ motivation (15%): existence of an innovation policy (100%) [18]
- Opportunities/challenges (20%): nº of opportunities/challenges (100%)
- Applied research (20%): % of project ideas with joint objective setting (100%) [17] **

Program initiation (5%)
- Governance established (100%): joint governance model setting (100%) [10, 25]

Program benefits delivery (50%)
- Collaboration intensity (25%): % steering committee meetings, performed/planned (15%); % result-sharing events (20%) [4, 26]; % workplace meetings (25%); % progress meetings (20%); % technical team meetings (20%)
- Technology (15%): nr. of complete standard patent or other IP applications (100%) [4, 17]
- New knowledge (15%): nr. of publications (50%) [4, 17]; nr. of joint publications (50%) [4, 27]
- Management and organization quality (10%): % technical deliverables (reports or prototypes) executed on time (100%) [10]
- Governance embedment (10%): governance model embedment (100%) [28]
- Human capital (25%): % recruitment of PhDs by industry (20%) [4]; % recruitment of research assistants (graduates) by industry partners (20%) [4, 17]; perception of the impact of the program on the development of the academic or professional career (20%) [4]; structure of collaborators’ qualifications (20%); nr. of master’s and PhD degrees (20%) [4, 17]

Program closure (15%)
- Innovations (50%): nr. of new products (50%) [17]; nr. of new process improvements (50%) [17]
- Solution concepts (35%): nr. of new solution concepts (50%) [17]; increase in TRLs (50%)
- New project ideas (15%): nr. of new project ideas (100%) [17]

Post-program (5%)
- Technology achievement (30%): nr. of patents granted (100%) [4, 17]
- Turnover (40%): turnover growth (100%) [4]
- Partnership sustainability (30%): value of new collaborative research projects generated (100%)

* Just an example of possible weights to attribute to each program management phase, process component and performance indicator.
** Program initiation.
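The three-level weighting in Table 1 (phase, process component, performance indicator) implies a straightforward weighted roll-up of normalized PI scores into a single program success score. The following is a minimal sketch, assuming each PI has already been normalized to a 0-1 scale; all structures, weights and scores here are illustrative, not values prescribed by the method.

```python
def weighted_score(items):
    """Combine (weight, score) pairs; weights are fractions summing to 1.0."""
    return sum(w * s for w, s in items)

def phase_score(components):
    """components: list of (component_weight, pi_list), where pi_list
    is a list of (pi_weight, pi_score) pairs for that component."""
    return weighted_score([(w, weighted_score(pis)) for w, pis in components])

def program_score(phases):
    """phases: list of (phase_weight, components); returns the 0-1 score."""
    return weighted_score([(w, phase_score(comps)) for w, comps in phases])

# Invented figures for illustration (two phases, 0-1 PI scores):
phases = [
    (0.25, [                                  # e.g. 'program preparation'
        (0.5, [(1.0, 0.8)]),                  # one single-PI component
        (0.5, [(0.6, 0.5), (0.4, 1.0)]),      # one two-PI component
    ]),
    (0.75, [                                  # remaining phases, collapsed
        (1.0, [(1.0, 0.6)]),
    ]),
]
# program_score(phases) → 0.6375
```

The same `weighted_score` helper is reused at every level, so changing the weights for a specific program/project context, as Section 5 recommends, only means editing the data, not the code.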


The newly introduced performance indicators and process components, not included in the Perkmann et al. [17] performance measurement system, resulted from the analysis of the two case studies used, HMIExcel and IC-HMI, and from other important references such as Seppo and Lilles [4]. Table 1 also indicates the author(s) that previously suggested each performance indicator (PI) adopted by the method.

The method combines both retrospective (lagging) and prospective (leading) performance indicators, which may be used during the whole program management lifecycle. For example, at the end of the ‘program preparation’ phase, the PIs assigned to this phase can be assessed to measure the success achieved up to that point; during the ‘program benefits delivery’ phase, the indicators assigned to the ‘program preparation’ and ‘program initiation’ phases should be reassessed, and might get a different value from the one obtained at the first assessment, while the PIs for the ‘program benefits delivery’ phase are assessed for the first time. In other words, a PI allocated to the first program management phase, ‘program preparation’, starts being measured when this first phase takes place, but continues to be measured during the whole program management lifecycle.

Although a major concern during the development of the method was to create a quantitative method, as objective as possible, some PIs can only be assessed subjectively. Quantitative PIs may be assessed through the use of bibliographic search engines and by surveying program/project members (e.g., to identify researchers’ past experience in UI collaborative R&D projects); qualitative PIs may be assessed by the program/project manager, through his/her expert judgement or the analysis of university/industry reports.
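The lifecycle reassessment described above can be sketched as a partial score: at any review point, only the phases reached so far have measured PIs, so one reasonable reading is to renormalize the weights of the assessed phases to sum to one. This renormalization is an assumption of the sketch, not something the paper specifies, and all scores below are invented.

```python
def partial_success(phase_weights, phase_scores, assessed):
    """Success score using only the phases assessed so far.

    phase_weights, phase_scores: dicts keyed by phase name;
    assessed: names of the phases measured (or re-measured) at this review.
    """
    total = sum(phase_weights[p] for p in assessed)
    return sum(phase_weights[p] * phase_scores[p] for p in assessed) / total

# Phase weights from the Table 1 exemplification; scores are invented.
weights = {"preparation": 0.25, "initiation": 0.05,
           "benefits delivery": 0.50, "closure": 0.15, "post-program": 0.05}

# Review at the end of 'program initiation': two phases assessed so far.
early = partial_success(weights,
                        {"preparation": 0.7, "initiation": 0.9},
                        ["preparation", "initiation"])
```

Re-running the same function at a later review, with earlier phases re-measured, naturally captures the point made above that a first-phase PI may take a different value at each reassessment.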
The weight of each performance indicator, process component and program management phase will most probably differ, depending on each one’s relevance to the success of the whole program and on the context. Based on the two research case studies, a set of weights is proposed in Table 1, purely as an exemplification, to give guidance to first users of the method. The weights should be set in accordance with the context of each specific program/project that adopts this method to measure its success (see Section 5).

4.1. Program preparation

‘Program preparation’ aims to design the R&D program and to strive for its financial support [21]. This program management phase encompasses six process components: 1) ‘researchers’ capability’; 2) ‘researchers’ motivation’; 3) ‘industry collaborators’ capability’; 4) ‘industry collaborators’ motivation’; 5) ‘opportunities/challenges’; and 6) ‘applied research’. Of these, three (1, 2 and 6) are proposed in the Perkmann et al. study [17], and the three remaining ones are suggested by the authors of the present study.

Concerning ‘researchers’ capability’, Perkmann et al. [17] suggest measuring ‘scientific impact’ through the citation count record, but alternatively the researchers’ h-index can be used [23]. Two additional indicators are suggested to measure ‘researchers’ capability’: researchers’ past experience in UI collaborative R&D projects, and the percentage of involvement of senior researchers (since, often, too many young assistant researchers are hired for these programs/projects). The main input factor for any type of R&D activity is R&D resources [4, 17, 24]. R&D resources are critical for the innovation process, since conducting R&D demands proper equipment, staff recruitment and the contracting of services to achieve meaningful outputs.
In this sense, it is suggested that the process component ‘researchers’ motivation’ be measured by the percentage of research income from industry, assuming that the higher this value, the higher the researchers’ motivation. Perkmann et al. [17] only suggested the process components ‘researchers’ capability’ and ‘researchers’ motivation’. However, since these are effective collaborative R&D programs/projects, ‘industry collaborators’ capability’ and ‘industry collaborators’ motivation’ are also important. There are, namely, certain features of a firm that affect its ability to use the knowledge transferred from the university [21]. For example, the qualifications of the firm’s collaborators, as well as prior experience of working together and prior experience of university-industry collaborations in the broadest sense, are important factors for the success of university-industry R&D programs/projects [7]. Another important aspect observed and discussed within the focus groups was the ‘opportunities/challenges’ component. For an R&D initiative to be a success, there must be a certain number of new project ideas to study; otherwise, there




will be no input with which to design an R&D program/project. Perkmann et al. [17] also suggest the importance of developing ‘applied research’ and, therefore, of taking into account the degree to which the detailed objectives and potential solutions of each initial project idea are jointly determined by industry and university members.

4.2. Program initiation

‘Program initiation’ aims to guarantee the initial planning of the program, the alignment of the program’s objectives and outcomes with those of the stakeholders that will effectively get involved in the program execution, and the establishment of the governance model of the program/project [21]. This program management phase is composed of only one process component: ‘governance established’. Due to the unpredictable nature of R&D programs/projects, and the prevailing “cultural gap” between university and industry, it is of the utmost importance to establish strong agreements between all parties for the achievement of the program/project objectives [7, 10]. The establishment of a governance model to support the overall program management and the management of the constituent projects, one that clearly defines the objectives, roles and responsibilities, and the program’s work plan, can be seen as a good practice that will facilitate program management, commitment, trust, team spirit, and communication on both sides of the university-industry alliance [10, 25, 29].

4.3. Program benefits delivery

‘Program benefits delivery’ aims to manage the program’s constituent projects in an integrated way in order to deliver the expected program benefits [21]. This phase is supported by six process components: 1) ‘collaboration intensity’; 2) ‘technology’; 3) ‘new knowledge’; 4) ‘management and organization quality’; 5) ‘governance embedment’; and 6) ‘human capital’.
Of these process components, four (1, 2, 3 and 6) are proposed in Perkmann et al.’s performance measurement system, while 4 and 5 are suggested by the authors of the present study. Perkmann et al. [17] suggest assessing ‘collaboration intensity’ by means of a constructed metric that operationalizes the intensity of interaction between the partners, such as the periodic exposure of intermediate results to all parties’ members. Interaction between academic researchers and industry through a wider set of channels is more likely to build the capabilities necessary to fill the gap between science and its application [26]. Therefore, the frequency and variety of the types of interaction between the partners are bound to enable the transmission of know-how [17, 26]; for example, result-sharing events, and steering committee, workplace, progress, and technical meetings.

Perkmann et al. [17] also suggest that high-quality research should make it possible to generate new scientific knowledge. Likewise, conducting relevant research should allow the development of new technologies. Therefore, the number of patent applications submitted, and the number of publications and joint publications, are the PIs suggested to measure the components ‘technology’ and ‘new knowledge’, respectively.

A new process component, ‘management and organization quality’, is considered in the method. As argued by Barnes et al. [10], the key to successful collaborations lies in how well they are managed, and an easy way to measure this is through the percentage of technical deliverables (reports or prototypes) executed on time. This PI is also an indication of how well established and embedded the governance model is in the consortium organization.
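As an illustration of this PI, a small helper for the on-time deliverable percentage might look like the following; the record layout (`due` and `delivered` dates) is an assumption made for the sketch, not something the paper specifies.

```python
from datetime import date

def pct_on_time(deliverables):
    """% of technical deliverables (reports or prototypes) delivered
    on or before their due date; returns 0.0 for an empty list."""
    if not deliverables:
        return 0.0
    on_time = sum(1 for d in deliverables if d["delivered"] <= d["due"])
    return 100.0 * on_time / len(deliverables)

deliverables = [
    {"due": date(2017, 3, 1), "delivered": date(2017, 2, 27)},  # on time
    {"due": date(2017, 6, 1), "delivered": date(2017, 6, 10)},  # late
]
# pct_on_time(deliverables) → 50.0
```

The same pattern applies to the other percentage-style PIs in Table 1, such as meetings performed versus planned: a count of qualifying items over a total.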
In the ‘human capital’ process component, four new PIs are introduced: 1) the percentage of recruited PhD holders that get hired by industry – the creation of scientific employment; 2) the program members’ perception of the impact of the program/project on the development of their academic or professional careers; 3) the structure of collaborators’ qualifications – whether it improves or not; and 4) the number of master’s and PhD degrees, taking into account the dissertations and theses completed in the context of the R&D program.

4.4. Program closure

‘Program closure’ aims to execute a controlled closure of the program and determine whether the collaboration can be sustained [21], namely, whether the program/project was a success and new programs/projects can be initiated in the near future. All the process components proposed for ‘program closure’ are supported by the Perkmann et al. study

Gabriela Fernandes et al. / Procedia Computer Science 121 (2017) 451–460


[17], except one; i.e., only one new PI is introduced by the authors of the present paper. The availability of new technologies can be a source of product (or service) innovation or process innovation. Finally, new scientific knowledge and new technologies can generate new solutions, which are an intermediate step between plain ideas and commercially exploitable innovations. The benefit attained with these new solutions can be assessed by measuring the increase in their Technology Readiness Level (TRL) from project initiation until closure. Lastly, in order to guarantee the sustainability of the university-industry collaboration, it is also important to assess the process component ‘new project ideas’ generated during the on-going R&D program, which can be used as the foundation of a proposal for a new R&D funding program.

4.5. Post-program

In the ‘post-program’ phase, it is suggested to assess three process components: 1) ‘technology achievement’; 2) ‘turnover’; and 3) ‘partnership sustainability’. For measuring ‘technology achievement’, Perkmann et al. [17] propose the number of patents actually deployed or licensed; however, the authors of the method developed here prefer the number of patents granted. For measuring ‘turnover’, the increase in sales should be taken into account. For the process component ‘partnership sustainability’, the value of new collaborative research programs/projects generated is suggested as a relevant indicator.

5. Steps for applying the method

The method developed for measuring the success of collaborative university-industry R&D funded programs/projects, as a management tool, is guided by objectives, and its generic process is known as E-4 (Establish, Extract, Evaluate and Execute), based on the Deming cycle (Plan, Do, Check and Act) [30].
E-4 is a form of performance management that implements improvements through performance measurement in order to achieve success [30], which is the fundamental principle of the developed method. Therefore, it is proposed that the implementation of the method should be the program/project manager’s responsibility, supported by a Project Management Office (PMO) team if one is in place, as was the case with the HMIExcel and IC-HMI programs. Table 2 summarizes a set of steps for the application of the method, in order to give practical guidance on how university-industry partners could make use of such a tool.

Table 2. Steps for the application of the method for measuring the success of R&D collaborative programs/projects.

Step | Description | Responsibility
1st | Assess the relevance of each program/project phase, process component and performance indicator, defining percentage weights. (For example, assign weights among the present and past program/project phases; for each phase, assign a combined weight of 100% among its process components; and, for each process component, once again, assign a combined percentage of 100% among its performance indicators.) | Program/project manager, approved by the steering committee
2nd | Collect the measurement data for each performance indicator. | Program/project manager or a supportive PMO team
3rd | Score each performance indicator on a scale of 1 to 5 (5 – Very high; 4 – High; 3 – Medium; 2 – Low; 1 – Very low), based on the measurement results collected for each PI. | Program/project manager
4th | Select the performance indicators which scored below expectations and prioritize them. | Program/project manager
5th | Answer the question: what are the main action(s) to take for improving the performance indicators? Define the action(s). | Program/project manager with the PMO team support
6th | Depending on the assessment period established (6 or 12 months), re-measure each indicator, i.e., go back to the 2nd step. The organization should stop this process 1-3 years after ‘program closure’, depending on the impacts time horizon. | –
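The scoring and selection steps of the method can be sketched end to end; the PI names, combined weights and 1-5 scores below are purely illustrative (each PI’s combined weight is assumed to be the product of its phase, component and PI weights, expressed as fractions summing to 1):

```python
# Illustrative PI records after steps 1-3: (name, combined weight, 1-5 score).
pis = [
    ("Patent applications",      0.25, 4),
    ("Joint publications",       0.25, 2),
    ("Deliverables on time (%)", 0.30, 5),
    ("New project ideas",        0.20, 3),
]

def overall_success(records):
    """Weighted average of PI scores on the 1-5 scale (aggregates steps 1-3)."""
    return sum(weight * score for _, weight, score in records)

def below_expectation(records, threshold=3):
    """Step 4: PIs scoring below the expected level, highest weight first."""
    flagged = [(name, weight, score) for name, weight, score in records
               if score < threshold]
    return sorted(flagged, key=lambda r: r[1], reverse=True)

print(round(overall_success(pis), 2))  # 3.6
print(below_expectation(pis))          # [('Joint publications', 0.25, 2)]
```

The flagged PIs would then feed the 5th step (defining improvement actions) before the re-measurement loop of the 6th step.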

The first step of the method’s application is to assess the relevance of each program/project phase, process
component and performance indicator. In Table 1, an example of possible weights is given in the context of the two case studies used in this research; however, the weights to be assigned depend on the specific context of the collaborative R&D program/project and should therefore be assigned by those in charge of measuring the program’s/project’s success. In the particular case of the weighting of the program/project management phases, if, for example, it is decided to take a measurement at the end of ‘program preparation’, the weight assigned to this phase is 100% and to the remaining phases 0%. Note that the criteria for measuring the success of a program/project should be established at the beginning of the program/project [31], and these success criteria should not change during its life cycle. For example, the weights assigned to the process components and PIs should not change; the weights assigned to the program/project management phases are, of course, the exception. The second step implies that the program/project manager, or his/her supporting PMO team, collects all the data necessary to measure each performance indicator. If the collected data do not allow measuring some PIs, the program/project manager (3rd step) will need to subjectively assign a score to the PI, based on his/her expert judgment. In these cases in particular, the program/project manager might consider taking the perceptions of different stakeholders into account when assigning a score to the PIs; such perceptions might be collected, for example, through surveys or focus groups. It is important that the first and third steps be conducted at different moments, so that the scoring of each performance indicator does not influence the weight assigned to it, and vice versa.
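The weight hierarchy defined in the first step can be represented and sanity-checked programmatically; a minimal sketch with hypothetical phases, components and PIs (all names and weights below are illustrative only):

```python
# Hypothetical weight hierarchy: phase -> component -> PI weights (percent).
# At every level, the weights assigned among siblings must total 100%.
weights = {
    "Program preparation": {
        "weight": 20,
        "components": {
            "New project ideas": {
                "weight": 100,
                "pis": {"Number of jointly detailed project ideas": 100},
            },
        },
    },
    "Program benefits delivery": {
        "weight": 80,
        "components": {
            "New knowledge": {
                "weight": 60,
                "pis": {"Publications": 50, "Joint publications": 50},
            },
            "Technology": {
                "weight": 40,
                "pis": {"Patent applications": 100},
            },
        },
    },
}

def check_weights(hierarchy):
    """Verify that each level of the weight hierarchy sums to 100%."""
    assert sum(p["weight"] for p in hierarchy.values()) == 100, "phase weights"
    for phase in hierarchy.values():
        comps = phase["components"]
        assert sum(c["weight"] for c in comps.values()) == 100, "component weights"
        for comp in comps.values():
            assert sum(comp["pis"].values()) == 100, "PI weights"
    return True
```

Running such a check before any measurement round helps enforce the rule that the component and PI weights, once established, stay fixed over the life cycle.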
For most of the PIs, the third step can be applied directly, assuming a simple rule of dividing the whole percentage scale of accomplishment equally into five intervals, each corresponding to one of the five score levels: 0-20% – score 1; 20-40% – score 2; 40-60% – score 3; 60-80% – score 4; and 80-100% – score 5. The method combines both retrospective (lagging) and prospective (leading) PIs, therefore allowing proactive management and enabling corrective actions, if needed, so that, at the end, the program/project may be labeled a success. Therefore, the PIs that scored below expectations should be selected and prioritized (4th step), in order to establish action(s) that would improve them (5th step). The method can be applied throughout the whole program/project management lifecycle, and at every moment when the method is scheduled to be applied and the success of the program/project to be assessed (6th step), it is necessary to go back to the second step and re-measure each PI. Based on previous researchers’ experiences with collaborative R&D funded programs, it is suggested that the method be applied for the first time at the end of ‘program preparation’ and, after that, 4 to 5 more times: 2-3 assessments during ‘program benefits delivery’; at the end of ‘program closure’; and 1-3 years after ‘program closure’ – the ‘post-program’ – depending on the context and the time horizon of the program impacts. The assessments should occur at six-month to one-year intervals. Intervals of less than 6 months could be counterproductive because the method, in spite of being simple to use, takes time to collect all the data necessary for measuring each PI.
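The interval rule of the third step maps directly onto a small function; a minimal sketch (assigning a shared boundary value, e.g. exactly 80%, to the higher interval is our choice, since the paper’s intervals meet at their endpoints):

```python
def score_from_percentage(pct):
    """Map a PI's percentage of accomplishment onto the 1-5 scale
    by dividing the 0-100% range into five equal intervals."""
    if not 0 <= pct <= 100:
        raise ValueError("percentage must be within 0-100")
    # 0-20% -> 1, 20-40% -> 2, 40-60% -> 3, 60-80% -> 4, 80-100% -> 5;
    # a boundary value falls into the higher interval (an assumption).
    return min(int(pct // 20) + 1, 5)

print(score_from_percentage(75))  # 4
```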
Remember that, for example, at the end of the ‘program preparation’ phase, the PIs attributed to this phase are assessed to measure the success up to ‘program preparation’; during ‘program benefits delivery’, however, the PIs of ‘program preparation’ and ‘program initiation’ should be reassessed, while the PIs of ‘program benefits delivery’ are assessed for the first time.

6. Conclusions

The research reported in this paper builds knowledge in the domain of program/project management in the context of collaborative university-industry R&D funded contracts. A detailed method to measure and track the success of university-industry R&D collaborations throughout the program/project lifecycle is presented, which can be used to compare the success of different initiatives with university and industry partners, and can also be adopted by funding entities to assess the success of the programs/projects that they support financially. The method is mainly an instrument for the program/project manager to manage the program/project proactively, and its results are an excellent means for them to communicate with the program/project stakeholders. For future research, it would be useful to apply the method in real case studies. The application of the method, in order to discuss the process of implementation and its implications, tends to be a lengthy subject that is almost impossible to report in a single paper; therefore, this paper focuses solely on the development of the method.


Acknowledgements

This research is sponsored by the Portugal Incentive System for Research and Technological Development, Project in co-promotion nº 002814/2015 (iFACTORY 2015-2018), and by the FCT (SFRH/BPD/111033/2015).

References

1. Perkmann M, Walsh K. University–industry relationships and open innovation: Towards a research agenda. International Journal of Management Reviews 2007;9(4):259-280.
2. D’Este P, Perkmann M. Why do academics engage with industry? The entrepreneurial university and individual motivations. The Journal of Technology Transfer 2011;36(3):316-339.
3. Perkmann M, King Z, Pavelin S. Engaging excellence? Effects of faculty quality on university engagement with industry. Research Policy 2011;40(4):539-552.
4. Seppo M, Lilles A. Indicators measuring university-industry cooperation. Discussions on Estonian Economic Policy 2012;20(1):204-225.
5. Grimaldi R, Von Tunzelmann N. Assessing collaborative, pre-competitive R&D projects: the case of the UK LINK scheme. R&D Management 2002;32(2):165-173.
6. OECD. Frascati Manual 2015: Guidelines for Collecting and Reporting Data on Research and Experimental Development, The Measurement of Scientific, Technological and Innovation Activities. 7th ed. Paris: OECD Publishing; 2015.
7. Barnes T, Pashby I, Gibbons A. Effective University–Industry Interaction: A Multi-case Evaluation of Collaborative R&D Projects. European Management Journal 2002;20(3):272-285.
8. Healy A, Perkmann M, Goddard J, Kempton L. Measuring the impact of university–business cooperation. Final Report. Luxembourg: Publications Office of the European Union; 2014.
9. Etzkowitz H. Innovation in Innovation: The Triple Helix of University-Industry-Government Relations. Social Science Information 2003;42(3):293-337.
10. Barnes TA, Pashby IR, Gibbons AM. Managing collaborative R&D projects: development of a practical management tool. International Journal of Project Management 2006;24(5):395-404.
11. Cravens K, Piercy N, Cravens D. Assessing the performance of strategic alliances: matching metrics to strategies. European Management Journal 2000;18(5):529-541.
12. Ankrah S, Al-Tabbaa O. Universities–industry collaboration: A systematic review. Scandinavian Journal of Management 2015;31(3):387-408.
13. De Fuentes C, Dutrénit G. Best channels of academia–industry interaction for long-term benefit. Research Policy 2012;41(9):1666-1682.
14. Sherwood AL, Covin JG. Knowledge Acquisition in University–Industry Alliances: An Empirical Investigation from a Learning Theory Perspective. Journal of Product Innovation Management 2008;25(2):162-179.
15. Pellegrinelli S. What's in a name: Project or programme? International Journal of Project Management 2010;29(2):232-240.
16. Project Management Institute. The Standard for Program Management. 3rd ed. Pennsylvania: Project Management Institute, Inc.; 2013.
17. Perkmann M, Neely A, Walsh K. How should firms evaluate success in university–industry alliances? A performance measurement system. R&D Management 2011;41(2):202-216.
18. Iqbal AM, Khan AS, Iqbal S, Senin AA. Design of Success Criteria Based Evaluation Model for Assessing the Research Collaboration Between University and Industry. International Journal of Business Research and Management 2011;2(2):59-73.
19. Pinto EB, Fernandes G, Oliveira J, Araújo M, Pontes A, Machado RJ. Managing a Successful University-Industry Collaborative Funded Innovation Programme. XXVII ISPIM Innovation Conference Proceedings, Porto (Portugal) 2016;1-13.
20. Parmenter D. Key Performance Indicators: Developing, Implementing and Using Winning KPIs. New Jersey: John Wiley & Sons, Inc.; 2010.
21. Fernandes G, Pinto EB, Machado RJ, Araújo M, Pontes A. A Program and Project Management Approach for Collaborative University-industry R&D Funded Contracts. Procedia Computer Science 2015;64:1065-1074.
22. Fernandes G, Pinto EB, Araújo M, Pontes A, Machado RJ. Perceptions of Different Stakeholders on Managing Collaborative University-Industry R&D Funded Contracts. Procedia Computer Science 2016;100:878-887.
23. Czarnecki L, Kaźmierkowski MP, Rogalski A. Doing Hirsch proud; shaping H-index in engineering sciences. Bulletin of the Polish Academy of Sciences Technical Sciences 2013;61(1):5-21.
24. Adams R, Bessant J, Phelps R. Innovation management measurement: A review. International Journal of Management Reviews 2006;8(1):21-47.
25. Chiesa V, Frattini F, Lazzarotti V, Manzini R. Performance measurement in R&D: exploring the interplay between measurement objectives, dimensions of performance and contextual factors. R&D Management 2009;39(5):487-519.
26. D’Este P, Patel P. University–industry linkages in the UK: What are the factors underlying the variety of interactions with industry? Research Policy 2007;36(9):1295-1313.
27. Tijssen RJ, Van Leeuwen TN, Van Wijk E. Benchmarking university-industry research cooperation worldwide: performance measurements and indicators based on co-authorship data for the world's largest universities. Research Evaluation 2009;18(1):13-24.
28. Fernandes G, Ward S, Araújo M. Developing a Framework for Embedding Useful Project Management Improvement Initiatives in Organizations. Project Management Journal 2014;45(4):81-108.
29. Mora-Valentin EM, Montoro-Sanchez A, Guerras-Martin LA. Determining factors in the success of R&D cooperative agreements between firms and research organizations. Research Policy 2004;33(1):17-40.
30. Ebert C. Software Project and Process Measurement. The Journal of Defense Software Engineering 2009;July/August:22-27.
31. Project Management Institute. A Guide to the Project Management Body of Knowledge. 5th ed. Pennsylvania: PMI, Inc.; 2013.