Two Expert Diagnosis Systems for SMEs: From Database-only Technologies to the Unavoidable Addition of AI Techniques

Sylvain Delisle (1) & Josée St-Pierre (2)
Institut de recherche sur les PME
Laboratoire de recherche sur la performance des entreprises
Université du Québec à Trois-Rivières
1: Département de mathématiques et d'informatique
2: Département des sciences de la gestion
C.P. 500, Trois-Rivières, Québec, Canada, G9A 5H7
Phone: 1-819-376-5011 + 3832  Fax: 1-819-376-5185
Email: {sylvain_delisle, josee_st-pierre}@uqtr.ca
Web: www.uqtr.ca/{~delisle, dsge}

Abstract. In this application-oriented paper, we describe two expert diagnosis systems we have developed for SMEs. Both systems are fully implemented and operational, and both have been put to use on data from actual SMEs. Although both systems are packed with knowledge and expertise on SMEs, neither has been implemented with AI techniques. We explain why and how both systems relate to knowledge-based and expert systems. We also identify aspects of both systems that will benefit from the addition of AI techniques in future developments.

1. Expertise for Small and Medium-sized Enterprises (SMEs)

The work we describe here takes place within the context of the Research Institute for SMEs (www.uqtr.ca/inrpme/anglais/index.html), whose core mission is to support fundamental and applied research that fosters the advancement of knowledge on SMEs and contributes to their development. The specific lab in which we conducted the research projects referred to in this paper is the LaRePE (LAboratoire de REcherche sur la Performance des Entreprises: www.uqtr.ca/inrpme/larepe/). This lab is mainly concerned with developing scientific expertise in the study and modeling of SME performance, covering a variety of interrelated subjects such as finance, management, information systems, production, and technology. The vast majority of research projects carried out at the LaRePE involve both theoretical and practical aspects, often necessitating in-field studies with SMEs. As a result, our research projects always attempt to provide practical solutions to real problems confronting SMEs.

In this application-oriented paper we briefly describe two expert diagnosis systems we have developed for SMEs. Both can be considered decision support systems (see [15] and [18]). The first is the PDG system [5]: benchmarking software that evaluates production and management activities, and the results of these activities in terms of productivity, profitability, vulnerability and efficiency. The second is the eRisC system [6]: software that helps identify, measure and manage the main risk factors that could compromise the success of SME development projects. Both systems are fully implemented and operational. Moreover, both have been put to use on data from actual SMEs.

What is of particular interest here, especially from a knowledge-based systems perspective, is that although both the PDG and eRisC systems are packed with knowledge and expertise on SMEs, neither has been implemented with Artificial Intelligence (AI) techniques. However, if one looks at them without paying attention to how they have been implemented, they qualify as "black-box" diagnostic expert systems. In the following sections, we provide further details on both systems and how they relate to knowledge-based and expert systems. We also identify aspects of both systems that could benefit from the addition of AI techniques in future developments.

2. The PDG System: SME Performance Diagnostic

2.1 An Overview of the PDG System

The PDG system evaluates an SME from an external perspective and on a comparative basis in order to produce a diagnosis of its performance and potential, complemented with relevant recommendations. Although we usually refer to the PDG system as a diagnostic system, it is in fact a hybrid diagnostic-recommendation system: it not only identifies the evaluated SME's weaknesses but also suggests how to address them in order to improve the SME's performance.

An extensive questionnaire is used to collect relevant information items on the SME to be evaluated. Data extracted from the questionnaire are computerized and fed into the PDG system, which performs an evaluation in approximately 3 minutes by contrasting the particular SME with an appropriate group of SMEs for which we have already collected relevant data. The PDG's output is a detailed report in which 28 management practices (concerning human resources management, production systems and organization, market development activities, and accounting, finance and control tools), 20 results indicators and 22 general information items are evaluated, leading to 14 recommendations on short-term actions the evaluated SME could undertake to improve its overall performance.

As shown in Figure 1, the PDG expert diagnosis system is connected to an Oracle database which collects all the relevant data for benchmarking purposes; the PDG also uses the SAS statistics package, plus Microsoft Excel, for various calculations and the generation of the final output report. The PDG reports are constantly monitored by a multidisciplinary team of human experts in order to ensure that the recommendations are valuable to the entrepreneurs. This validation phase, which always takes place before the report is sent to the SME, is an occasion to make further improvements to the PDG system whenever appropriate. It is also a valuable means for the human experts to update their own expertise on SMEs. Figure 1 also shows that an intermediary partner is part of the process in order to guarantee confidentiality: nobody in our lab knows which companies the data are associated with.

[Figure 1 is a data-flow diagram: the Entrepreneur (SME) sends the questionnaire and financial data to an Intermediary Partner, who forwards them to the Lab. There, data and results flow between an Oracle database and the PDG Expert Diagnosis System, which exchanges information and expertise with a Multidisciplinary Team of Human Experts; the report flows back through the Intermediary Partner to the Entrepreneur.]

Fig. 1. The PDG system: evaluation of SMEs, from an external perspective and on a comparative basis, in order to produce a diagnosis of their performance and potential

The current version of the PDG system has been in production for 2 years. So far, we have produced more than 600 reports and accumulated in the database the evaluation results of approximately 400 different manufacturing SMEs. A recent study examined 307 Canadian manufacturing SMEs that have used the PDG report, including 49 that have done so more than once. Our results show that the PDG's expert benchmarking evaluation allows these organisations to improve their operational performance, confirming the usefulness of benchmarking but also the value of the recommendations included in the PDG report concerning short-term actions to improve management practices [17].

2.2 Some Details on the PDG System

The PDG's expertise is located in two main components: the questionnaire and the benchmarking results interpretation module; in terms of implementation, the PDG uses an Oracle database, the SAS statistical package, and Microsoft Excel. The first version of the questionnaire was developed by a multidisciplinary team of researchers in the following domains: business strategy, human resources, information systems, industrial engineering, logistics, marketing, economics, and finance. The questionnaire development team was faced with two important challenges that quickly became crucial goals: 1) find a common language (a shared ontology) that would allow researchers to understand each other and, at the same time, would be accessible to entrepreneurs when answering the questionnaire; and 2) identify long-term performance indicators for SMEs, as well as problem indicators, while keeping the contents to a minimum since an in-depth evaluation was not appropriate. The team met these two goals by assigning a "knowledge integrator" role to the project leader.

During the 15-month period of its development, the questionnaire was tested with entrepreneurs in order to ensure that it was easy to understand both in terms of a) contents and question formulation, and b) report layout and information visualization. All texts were written with a clear pedagogical emphasis since the subject matter was not trivial and the intended readership was quite varied and heterogeneous. Several prototypes were presented to entrepreneurs, who showed a marked interest in graphics and colours. Figure 2 below shows a typical page of the 10-page report produced by the PDG system.

Fig. 2. An excerpt from a typical report produced by the PDG system. The evaluated SME’s performance is benchmarked against that of a reference group.

The researchers' expertise was invaluable in identifying the vital information that would allow the PDG system to rapidly produce a general diagnosis of any manufacturing SME. The diagnosis also needed to be reliable and complete, while remaining comprehensible to typical entrepreneurs, as pointed out before. This was pioneering research work for the whole team: other SME diagnosis systems are generally financial and based on valid quantitative data. The knowledge integrator mentioned above played an important part in this information engineering and integration process. Each expert had to identify practices, systems, or tools that had to be implemented in a manufacturing SME to ensure a certain level of performance. Then, performance indicators had to be defined in order to measure to what extent these individual practices, systems, or tools were correctly implemented and allowed the enterprise to meet specific goals; the relationship between practices and results is a distinguishing characteristic of the PDG system. Next, every selected performance indicator was assigned a relative weight by the expert and the knowledge integrator. This weight is used to position the enterprise being diagnosed with regard to its reference group, thus allowing the production of relevant comments and recommendations. The weight is also used to produce a global evaluation that is displayed in a synoptic table.

Contrary to many performance diagnostic tools in which the enterprise's information is compared to norms and standards (e.g. [11]), the PDG system evaluates an enterprise relative to a reference group selected by the entrepreneur. Research conducted at our institute seriously questions the use of norms and standards: it appears to be dubious for SMEs, which are simply too heterogeneous to support the definition of reliable norms and standards.

Performance indicators are implemented as variables in the PDG system, more precisely in its database and in the benchmarking results interpretation module (within the report production module). These variables fall into three categories: 1) binary variables, associated with yes/no questions; 2) scale variables, associated with the relative ranking of the enterprise along a 1-to-4 or 1-to-5 scale, depending on the question; and 3) continuous (numerical) variables, associated with numerical figures such as the export rate or the training budget.

Since variables come in different types, they must also be processed differently at the statistical level, notably when computing the reference group used for benchmarking purposes. In order to characterize the reference group with a single value, a central tendency measure representative of the reference group's set of observations is used. Depending on the variable category and its statistical distribution, means, medians, or percentages are used in the benchmarking computations. Table 1 shows an example of how the evaluated enterprise's results are ranked and associated with codes that are then used to produce the various graphics in the benchmarking report. The resulting codes (see CODE in Table 1) indicate the evaluated enterprise's benchmarking result for every performance indicator. They are then used by the report generation module to produce the benchmarking output report, which contains many graphical representations, as well as comments and recommendations.
The codes are used to assign colours to the enterprise, while the reference group is always associated with the colour beige. For instance, if the enterprise performs better than its reference group, CODE = 4 means the colour is forest green; in the opposite situation, CODE = 4 would mean the colour is red. The other colours, with other meanings, are yellow, salmon, and olive green. Figure 2 above illustrates what these coloured graphics look like (although they appear only in black and white here).

Scale variable (example: participative management):
  if SME >= (1.25 x MEA), then CODE = 4
  if SME >= (1.10 x MEA), then CODE = 3
  if SME >= (1.00 x MEA), then CODE = 2
  if SME >= (0.90 x MEA), then CODE = 1
  if SME >= (0.75 x MEA), then CODE = 0

Binary variable (example: remuneration plan):
  if SME = 1 and 10% of RG = 1, then CODE = 4
  if SME = 1 and 25% of RG = 1, then CODE = 3
  if SME = 1 and 50% of RG = 1, then CODE = 2
  if SME = 1 and 75% of RG = 1, then CODE = 1
  if SME = 1 and 90% of RG = 1, then CODE = 0

Continuous (numerical) variable (example: fabrication cost):
  if SME >= (1.25 x MED), then CODE = 4
  if SME >= (1.10 x MED), then CODE = 3
  if SME >= (1.00 x MED), then CODE = 2
  if SME >= (0.90 x MED), then CODE = 1
  if SME >= (0.75 x MED), then CODE = 0

Table 1. Some aspects of the representation of expertise within the PDG system with performance indicators implemented as variables. This table shows three (3) variables: one scale variable (participative management), one binary (remuneration plan), and one continuous numerical (fabrication cost). Legend: SME = variable value for the evaluated enterprise; MEA = mean value of the variable in the reference group; RG = reference group; MED = median value of the variable in the reference group; CODE = resulting code for the evaluated enterprise.
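To make the rules of Table 1 concrete, here is a minimal sketch in Python of how the CODE values could be computed. It is illustrative only: the actual PDG implementation relies on an Oracle database, SAS and Excel, and the behaviour below the lowest threshold, as well as the reading of the binary percentage brackets, are our assumptions.

def code_from_ratio(sme: float, central: float) -> int:
    """CODE for scale and continuous variables: compare the evaluated SME's
    value with the reference group's mean (MEA) or median (MED), using the
    thresholds of Table 1."""
    thresholds = [(1.25, 4), (1.10, 3), (1.00, 2), (0.90, 1), (0.75, 0)]
    for factor, code in thresholds:
        if sme >= factor * central:
            return code
    return 0  # assumption: values below 0.75 x MEA/MED map to the lowest code

def code_from_binary(sme: int, rg_share: float) -> int:
    """CODE for binary variables: the SME answers yes (sme = 1) and rg_share
    is the fraction of the reference group answering yes. We read Table 1 as
    'the rarer the practice in the group, the higher the code'."""
    if sme != 1:
        return 0  # assumption: Table 1 only specifies the sme = 1 cases
    brackets = [(0.10, 4), (0.25, 3), (0.50, 2), (0.75, 1), (0.90, 0)]
    for share, code in brackets:
        if rg_share <= share:
            return code
    return 0

# Example: an SME whose indicator is 1.15 x the group median gets CODE 3.
print(code_from_ratio(sme=1.15, central=1.0))  # -> 3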

3. The eRisC System: Risk Assessment of SME Development Projects

3.1 An Overview of the eRisC System

SMEs often experience difficulties accessing financing to support their activities in general, and their R&D and innovation activities in particular (see [1], [4], [8], and [9]). Establishing the risk levels of innovation activities can be quite complex, and there is no formalized tool to help financial analysts assess them and correctly set compensation and financing terms that will satisfy both lenders and entrepreneurs. This situation creates a lot of pressure on the cash resources of innovating SMEs. Based on our team's experience with SMEs and expertise in risk assessment, and thanks to the contribution of several experts who constantly deal with SME development projects, we have developed state-of-the-art Web-based software called eRisC (see Figure 3). The eRisC (https://oraprdnt.uqtr.uquebec.ca/erisc/index.jsp) expert diagnosis system identifies, measures and helps manage the main risk factors that could compromise the success of SME development projects, including expansion, export and innovation projects, each of which is the object of a separate section of the software. An extensive, dynamic, Web-based questionnaire is used to collect relevant information items on the SME development project to be evaluated.

[Figure 3 is a data-flow diagram: the Entrepreneur (SME) or another agent submits data over the Internet; data and results flow between an Oracle database and the eRisC Expert Diagnosis System, which returns the report online. Dotted arrows indicate the (optional) flow of information and expertise between the system and a Multidisciplinary Team of Human Experts.]
Fig. 3. The eRisC system: Web-based software that helps identify, measure and manage the main risk factors involved in SME development projects

The contents of the questionnaire are based on an extensive literature review in which we identified over 200 risk factors acting upon the success of SME development projects. For example, the factors associated with export activity are export experience, commitment/planning, target market, product, distribution channel, shipping, and contractual/financial aspects. These seven elements are broken down into 21 sub-elements involving between 58 and 216 questions: the number of questions ranges from 59 to 93 for an expansion project, from 58 to 149 for an export project, and from 86 to 216 for an innovation project. Data extracted from the questionnaire are fed into an elaborate, knowledge-intensive algorithm that computes risk levels and identifies the main risk elements associated with the evaluated project.

As shown in Figure 3, the eRisC expert diagnosis system is connected to an Oracle database which collects all the relevant data. Since eRisC was developed after the PDG system, it benefited from the most recent Web-based technologies (e.g. Oracle Java) and was designed right from the start as a fully automated system. More precisely, contrary to the PDG reports, there is no need to constantly monitor eRisC's output reports, hence the dotted arrows on the right-hand side of Figure 3 above. eRisC was developed for, and validated by, entrepreneurs, economic agents, lenders and investors, to identify the main risk factors of SME development projects in order to improve their success rates and facilitate their financing. Various organizations are now starting to put eRisC to use in real-life situations, allowing us to collect valuable information in eRisC's database on SME projects and their associated risk assessments. We have a group of 30 users, from various organizations and domains, who currently use eRisC for real-life projects and who provide us with useful feedback for marketing purposes.

3.2 Some Details on the eRisC System

eRisC's contents were developed by combining various sources of information, knowledge and expertise: the literature on business failure factors and on project management, our colleagues' expertise on SMEs, and invaluable information from various agents dealing with these issues on a day-to-day basis, such as lenders, investors, entrepreneurs, economic advisors and experts. Based on this abundant information, we first assembled a long list of potential risk factors that could significantly disturb or influence the development of SME projects.

In a second phase, we had to reduce the original list of risk factors, which was simply too long to be considered in its entirety in real-life practical situations. To do so, we considered the relative importance and influence of risk factors on the failure of evaluated projects. Once this pruning was completed, and after we ensured that we had not discarded important factors, the remaining key factors were grouped into meaningful generic categories. We then developed sets of questions and subquestions that would support the measurement of the actual risk level of a project. This also allowed us to add a risk management dimension to our tool by inviting the user to identify with greater precision the facets that could compromise the success of the project, thus allowing better control through the implementation of appropriate corrective measures. A relatively complex weight system was also developed in order to associate a quantitative measure with individual risk elements, to rank these elements, and to compute a global risk rating for the evaluated project (see Figure 4 below).

Fig. 4. An excerpt from the expansion project questionnaire. The only acceptable answers to questions are YES, NO, NOT APPLICABLE, DON’T KNOW.

In a third and final phase, the contents of eRisC were validated with many potential users, and their feedback was taken into consideration to adjust several aspects such as question formulation, term definitions, confidentiality of information, etc. At this point, the tool was still "on paper", as an extensive questionnaire (grid), and had not yet been implemented. An important design decision therefore had to be made at the very beginning of the implementation phase: how should the large, static, on-paper questionnaire be converted into a form suitable for implementation in the eRisC software? As we examined various possibilities, we gradually came to look at it more and more as an interactive and dynamic document. In this dynamic perspective, the questionnaire would be adaptable to the user's needs for the specific project at hand. In a sense, the questionnaire is at the meeting point of three complementary dimensions: the risk evaluation model as defined by domain experts, the user's perspective as a domain practitioner, and the computerized rendering of the previous two dimensions. Moreover, from a down-to-earth, practical viewpoint, users would only be interested in the resulting software if it proved to be quick, user-friendly, and better than their current non-automated tools.

Fig. 5. Risk Assessment Results Produced by eRisC.

With regard to the technological architecture, eRisC is based on the standard 3-tiered Web architecture, for which we selected Microsoft's Internet Explorer (Web browser) for the client side, the Tomcat Web server for the middleware, and, for the data server, the Oracle database server (Oracle Internet Application Server 8.1.7 Enterprise Edition) running on a Unix platform available at our University in a secured environment. All programming was done with JSP (JavaServer Pages) and JavaScript.

A great advantage of the 3-tiered model is that it supports dynamic Web applications in which the contents of the Web pages shown in the user's (client's) Web browser are computed "on the fly", i.e. dynamically, by the Web server from the information it fetches from the database server in response to the user's (client's) request. The five (5) main steps of processing involved in a project risk evaluation with eRisC are: 1) dynamic creation of the questionnaire, according to the initial options selected by the user; 2) project evaluation (question answering: see Figure 4) by the user; 3) saving of the data (user's answers) to the database; 4) computation of results; and 5) presentation of results in an online and printable report. Once phases 1 to 3 are completed, after some 30 minutes on average, eRisC only takes a minute or so to produce the final results, all of this taking place online.

The final results include a numerical value representing the risk rating (a relative evaluation between 0 and 100) for the specific SME project just evaluated, combined with the identification of at least the five most important risk factors (to optionally perform risk mitigation) within the questionnaire's sections used to perform the evaluation, plus a graphical (pie) representation showing the risk associated with every section and their respective weights in the computation of the global project risk rating (see Figures 5 and 6). The user can change these weights to adjust the evaluation according to the project's characteristics, or to better reflect her/his personal view on risk evaluation. These "personal" weights can also be saved by eRisC in the user's account so that the software can reuse them the next time around. A sketch of this computation is given below.
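As an illustration of steps 4 and 5, here is a minimal sketch in Python of a weighted risk aggregation of the kind described above. It is an assumption-laden reconstruction, not the production JSP code: we assume each answered question contributes a risk score in [0, 1] and each section carries a user-adjustable weight.

from dataclasses import dataclass

@dataclass
class Section:
    name: str
    weight: float               # user-adjustable relative weight
    scores: dict[str, float]    # risk factor id -> score in [0, 1]

def global_risk_rating(sections: list[Section]) -> float:
    """Weighted average of the sections' mean risk, scaled to 0-100."""
    total_weight = sum(s.weight for s in sections)
    weighted = sum(
        s.weight * (sum(s.scores.values()) / len(s.scores))
        for s in sections
    )
    return 100.0 * weighted / total_weight

def top_risk_factors(sections: list[Section], n: int = 5) -> list[tuple[str, float]]:
    """The n highest-scoring individual risk factors across all sections,
    as reported alongside the global rating."""
    factors = [(fid, score) for s in sections for fid, score in s.scores.items()]
    return sorted(factors, key=lambda f: f[1], reverse=True)[:n]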

Fig. 6. Mitigation Report and Risk Assessment Simulation in eRisC.

When sufficient data have been accumulated in eRisC's database, it will be possible to establish statistically-based weight models for every type of user. Amongst other possibilities, this will allow entrepreneurs to evaluate their projects with the weights used by bankers, helping them better understand the bankers' viewpoint when asking for financing assistance.

Finally, mitigation elements are associated with many of the risk factors listed in eRisC's output report. Typically associated with the most important risk factors, these mitigation elements suggest ways to reduce the risk rating just computed. The user can even re-compute the risk level under the hypothesis that the selected mitigation elements have been put in place, in order to assess the impact they may have on the project's global risk level. A new graphic is then produced showing a comparison of the risk levels before and after the mitigation process.
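The mitigation simulation can be pictured with the same kind of sketch. Again this is illustrative: the flat (single-section, equally-weighted) scoring and the 50% score reduction per mitigation element are purely assumptions made for the example.

def mitigated_rating(factor_scores: dict[str, float],
                     mitigated: set[str],
                     reduction: float = 0.5) -> float:
    """Re-compute a risk rating (0-100 scale) under the hypothesis that each
    selected mitigation element cuts its factor's score by `reduction`."""
    adjusted = {
        fid: score * (1 - reduction if fid in mitigated else 1)
        for fid, score in factor_scores.items()
    }
    return 100.0 * sum(adjusted.values()) / len(adjusted)

# Before/after comparison, as in the graphic eRisC produces
# (factor names are hypothetical):
scores = {"export experience": 0.8, "target market": 0.6, "shipping": 0.2}
print(mitigated_rating(scores, mitigated=set()))                  # before
print(mitigated_rating(scores, mitigated={"export experience"}))  # after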

4. Conclusion: AI-less Intelligent Decision Support Systems

A good deal of multi-domain expertise and informal knowledge engineering was invested in the design of the PDG and eRisC expert diagnosis systems. In fact, at the early stage of the PDG project, which was developed before eRisC, it was even hoped that an expert-system approach would apply naturally to the task we were facing. Using an expert system shell, a prototype expert system was in fact developed for a subset of the PDG system dealing only with human resources. However, reality turned out to be much more difficult than anticipated. In particular, the knowledge acquisition, knowledge modelling, and knowledge validation/verification phases ([7], [12], [11], [16], [3]) were too demanding given our resource constraints, especially in a multidisciplinary domain such as that of SMEs, for which little formalized knowledge exists. Indeed, many people were involved, from various specialization fields (management, marketing, accounting, finance, human resources, engineering, information technology, etc.) and with various backgrounds (researchers, graduate students, research professionals and, of course, entrepreneurs).

One of the main difficulties that hindered the development of the PDG as an expert system was the continuous change both the questionnaire and the benchmarking report underwent during the first three years of the project. While the research team was trying to develop a multidisciplinary model of SME performance evaluation, users' needs had to be considered, software development had to be carried out, and evaluation reports had to be produced for participating SMEs. This turned out to be a rather complicated situation. The prototype expert system mentioned above was developed in parallel with the current version, although only for the subset dealing with human resources; see [10] and [19] for examples of expert systems in finance.

The project leader's knowledge engineer role was very difficult since several experts from different domains were involved and the extraction and fusion of these various fields of expertise had never been done before. Despite the experts' valuable experience, knowledge, and good will, they had never been part of a similar project. The modelling of such rich, complex, and vast information, especially for SMEs, was an entirely new challenge both scientifically and technically. Indeed, because of their heterogeneous nature, and contrary to large enterprises, SMEs are much more difficult to model and evaluate. For instance, the implementation of certain management practices may be necessary and usual for traditional manufacturing enterprises, but completely inappropriate for a small enterprise subcontracting for a large company or a prime contractor. These important considerations and difficulties, not to mention their consequences on the project's schedule and budget, led to the abandonment of the expert system after the development of a simple prototype.

As for the eRisC system, since it was another multi-domain, multi-expert project, and thanks to our prior experience with the PDG system, it was quickly decided to stay away from AI-related approaches and techniques. During the development of eRisC's questionnaires, we saw how risk experts always tended to model risk assessment from their own perspective and their own personal knowledge, as reported in the literature. This is why we built our risk assessment model from many sources, thanks to a comprehensive literature review and the availability of several experts, in order to ensure we ended up with an exhaustive list of risk-determining factors for SME projects. The main perspectives differ as follows (see e.g. [14]):
− Bankers and lenders care mostly about financial aspects and tend to neglect qualitative dimensions that indicate whether the enterprise can solve problems and meet challenges in risky projects.
− Entrepreneurs do not realize that their involvement in the project can in fact constitute a major risk from their partners' viewpoint.
− Economic consultants and advisors have a specialized background that may prevent them from having a global perspective on the project.

Obviously, it is the fusion of all these diverse and complementary sources of expertise that would have been needed to develop the knowledge base of an expert-system version of the current eRisC system. However, this was simply impossible given the timetable and resources available to us. Of course, this does not mean that AI tools were inappropriate for these two projects. As a research team involved in an applied project, we made a rational decision based on our experience with a smaller-scale experiment (the PDG prototype expert system on human resources), on our time and budget constraints, and on the well-documented fact that multi-domain, multi-expert knowledge acquisition and modelling constitutes a great challenge. Yet another factor that greatly influenced our design decisions was the fact that both projects started out on paper as questionnaires, which led naturally to database building and the use of database-related software development. Thus, both the PDG and eRisC systems ended up as knowledge-packed systems built on database technology. However, as we briefly discuss in Section 5 below, we are now at a stage where we plan the addition of AI-related techniques and tools.

The current versions of the PDG and eRisC systems, although not implemented with AI techniques (e.g. a knowledge base of rules and facts, an inference engine, etc.; see, e.g., [13], [18]), qualify as "black-box" expert diagnosis systems. These unique systems are based on knowledge, information and algorithms that allow them to produce outputs that only a human expert, or in fact several human experts in different domains, would be able to produce in terms of diagnosis and recommendation quality. The reports contain mostly coloured diagrams and simple explanations formulated in plain English (or French) so that SME entrepreneurs can easily understand them. The PDG is the only system that can be said to use some relatively old AI techniques: the comments produced in the output report are generated via a template-based approach, an early technique used in natural language processing.
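For readers unfamiliar with the technique, template-based generation amounts to slot filling, along the following lines. The templates, the indicator name, and the fallback rule for intermediate codes are invented for illustration; they are not the PDG's actual wording or logic.

# Hypothetical comment templates keyed by benchmarking CODE (cf. Table 1).
TEMPLATES = {
    4: "Your {indicator} is well above that of your reference group.",
    2: "Your {indicator} is comparable to that of your reference group.",
    0: "Your {indicator} is well below that of your reference group; "
       "reviewing the related management practices is recommended.",
}

def comment(indicator: str, code: int) -> str:
    """Fill the template whose key is closest to the given CODE
    (assumption: intermediate codes reuse a neighbouring template)."""
    key = min(TEMPLATES, key=lambda k: abs(k - code))
    return TEMPLATES[key].format(indicator=indicator)

print(comment("training budget", 4))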

5. Future Work: Bringing AI Techniques Back into the Picture

The PDG and eRisC systems are now at a stage where we can reconsider the introduction of AI techniques in new developments. The main justification for this is the need to eliminate human intervention while preserving high-quality outputs based on rare, highly-skilled knowledge and expertise. We have started to develop new modules that will further increase the intelligence features of both systems. Here is a short, non-exhaustive list accompanied by brief explanations:
− Development of data warehouses and data mining algorithms to facilitate statistical processing of the data and extend knowledge extraction capabilities. Such extracted knowledge will be useful for improving the systems' meta-knowledge level, which could be used in the systems' explanations for instance, and also for broadening the human experts' domain knowledge. This phase is already in progress.
− The huge number of database attributes and statistical variables manipulated in both systems is overwhelming. A conceptual taxonomy, coupled with an elaborate data dictionary, has now become a necessary addition. For instance, the researcher should be able to find out quickly to what concepts a particular attribute (or variable) is associated, to what computations or results it is related, and so on. This phase has recently begun.
− Development of an expert system to eliminate the need for any human intervention in the PDG system. Currently, a human expert must revise all reports before they are sent to the SME. Most of the time, only minor adjustments are required. The knowledge used to perform this final revision takes into consideration individual results produced in various parts of the benchmarking report and analyzes the potential consequences of interrelationships between them, in order to ensure that the conclusions and recommendations for the evaluated SME are both valid and coherent. This phase is part of our future work.
− Augmenting the current systems with case-based reasoning and related machine learning algorithms. In several aspects of both systems, evaluation of the problem at hand could be facilitated if it were possible to establish relationships with similar problems (cases) already solved before (see the sketch after this list). Determining the problems' salient features to support this approach would also offer good potential to lessen the users' burden during the initial data collection phase. This phase is part of our future work.
− Studying the potential of agent technology to reengineer some elements of both systems, especially from a decision support system perspective [2]. This could be especially interesting for the modelling and implementation of distributed sources of expertise that contribute to decision processing. For example, in the PDG system each source of expertise in the performance evaluation of an SME could be associated with a distinct agent controlling and managing its own knowledge base. Interaction and coordination between these agents would be crucial aspects of a PDG system based on a community of cooperative problem-solving agents.
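As a minimal sketch of the case-based reasoning direction mentioned in the list above, assume past SME evaluations are encoded as numeric feature vectors; the feature names and the Euclidean metric are illustrative assumptions, not a design commitment.

import math

def nearest_cases(query: dict[str, float],
                  case_base: list[dict[str, float]],
                  k: int = 3) -> list[dict[str, float]]:
    """Retrieve the k past evaluations most similar to the query SME,
    using Euclidean distance over the features both cases share."""
    def distance(case: dict[str, float]) -> float:
        shared = query.keys() & case.keys()
        return math.sqrt(sum((query[f] - case[f]) ** 2 for f in shared))
    return sorted(case_base, key=distance)[:k]

# Hypothetical usage: find SMEs with a similar export rate and training budget.
base = [{"export_rate": 0.3, "training_budget": 0.02},
        {"export_rate": 0.7, "training_budget": 0.05}]
print(nearest_cases({"export_rate": 0.65, "training_budget": 0.04}, base, k=1))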

References

[1] Beaudoin R. and J. St-Pierre (1999). "Le financement de l'innovation chez les PME", Working paper for Développement Économique Canada, 39 pages. Available: http://www.DEC-CED.gc.ca/fr/2-1.htm
[2] Bui T. and J. Lee (1999). "An Agent-Based Framework for Building Decision Support Systems", Decision Support Systems, 25, 225-237.
[3] Caulier P. and B. Houriez (2001). "L'approche mixte expérimentée (modélisation des connaissances métiers)", L'Informatique Professionnelle, 32(195), juin-juillet, 30-37.
[4] Chapman R.L., C.E. O'Mara, S. Ronchi and M. Corso (2001). "Continuous Product Innovation: A Comparison of Key Elements across Different Contingency Sets", Measuring Business Excellence, 5(3), 16-23.
[5] Delisle S. and J. St-Pierre (2003). "An Expert Diagnosis System for the Benchmarking of SMEs' Performance", First International Conference on Performance Measures, Benchmarking and Best Practices in the New Economy (Business Excellence '03), Guimaraes (Portugal), 10-13 June 2003, to appear.
[6] Delisle S. and J. St-Pierre (2003). "SME Projects: A Software for the Identification, Assessment and Management of Risks", 48th World Conference of the International Council for Small Business (ICSB-2003), Belfast (Ireland), 15-18 June 2003, to appear.
[7] Fensel D. and F. van Harmelen (1994). "A Comparison of Languages which Operationalize and Formalize KADS Models of Expertise", The Knowledge Engineering Review, 9(2), 105-146.
[8] Freel M.S. (2000). "Barriers to Product Innovation in Small Manufacturing Firms", International Small Business Journal, 18(2), 60-80.
[9] Menkveld A.J. and A.R. Thurik (1999). "Firm Size and Efficiency in Innovation: Reply", Small Business Economics, 12, 97-101.
[10] Nedovic L. and V. Devedzic (2002). "Expert Systems in Finance: A Cross-Section of the Field", Expert Systems with Applications, 23, 49-66.
[11] Matsatsinis N.F., M. Doumpos and C. Zopounidis (1997). "Knowledge Acquisition and Representation for Expert Systems in the Field of Financial Analysis", Expert Systems with Applications, 12(2), 247-262.
[12] Rouge A., J.Y. Lapicque, F. Brossier and Y. Lozinguez (1995). "Validation and Verification of KADS Data and Domain Knowledge", Expert Systems with Applications, 8(3), 333-341.
[13] Santos J., Z. Vale and C. Ramos (2002). "On the Verification of an Expert System: Practical Issues", Lecture Notes in Artificial Intelligence #2358, 414-424.
[14] Sarasvathy D.K., H.A. Simon and L. Lave (1998). "Perceiving and Managing Business Risks: Differences Between Entrepreneurs and Bankers", Journal of Economic Behavior and Organization, 33, 207-225.
[15] Shim J.P., M. Warkentin, J.F. Courtney, D.J. Power, R. Sharda and C. Carlsson (2002). "Past, Present, and Future of Decision Support Technology", Decision Support Systems, 33, 111-126.
[16] Sierra-Alonso A. (2000). "Definition of a General Conceptualization Method for the Expert Knowledge", Lecture Notes in Artificial Intelligence #1793, 458-469.
[17] St-Pierre J., L. Raymond and E. Andriambeloson (2002). "Performance Effects of the Adoption of Benchmarking and Best Practices in Manufacturing SMEs", Small Business and Enterprise Development Conference, The University of Nottingham.
[18] Turban E. and J.E. Aronson (2001). Decision Support Systems and Intelligent Systems, Prentice Hall.
[19] Wagner W.P., J. Otto and Q.B. Chung (2002). "Knowledge Acquisition for Expert Systems in Accounting and Financial Problem Solving", Knowledge-Based Systems, 15, 439-447.