Ontology Engineering, Scientific Method, and the Research Agenda

Hans Akkermans (1,2) and Jaap Gordijn (1)

1 Free University Amsterdam VUA, Faculty of Sciences FEW, Business Informatics, Amsterdam, The Netherlands, [email protected]

2 AKMC Knowledge Management BV, Koedijk, The Netherlands, [email protected]

Abstract. The call for a “focus on content” in ontology research by Nicola Guarino and Mark Musen in their launching statement of the journal Applied Ontology has quite some implications and ramifications. We reflectively discuss ontology engineering as a scientific discipline, and we put this into the wider perspective of debates in other fields, including the methodology of social and natural sciences, and of Information Systems and design science research. We outline how ontologies provide us with a (new) scientific method for theory formation. This positioning allows for stronger concepts and techniques for theoretical, empirical and practical validation that in our view are now needed in the field. A prerequisite for this is an emphasis on ontology as a (domain) content oriented concept, rather than as primarily a computer representation notion. Taking application domain theories and the associated content reference of ontologies really seriously as first-class citizens will actually increase the contribution of ontology engineering to the development of scientific method in general. Next, ontologies should develop from the current static representations of relatively stable domain content into actionable theories-in-use, and a possible way forward is to build in capabilities for self-organization of ontologies as service-oriented knowledge utilities (SOKUs) that can be delivered over the Web.

Keywords: ontology, theory formation, methodology, validation, web intelligence

1 Introduction: Focus on Content?

Many believe that ontologies are first of all a computer science (CS) construct. There is some truth in this if one takes as a measure where the main locus and focus of ontology engineering activities lies. On the other hand, there is significant outside interest in ontologies for (mostly non-CS) reasons that mainly relate to the development and growth of the respective domains. Many in computer science, and in knowledge engineering (KE) as well, tend to see this as ‘application’: something to be happy with because it proves the relevance of ontology engineering, but at the same time something of lesser (scientific) importance than the core CS issues in ontology such as representation, languages, reasoning techniques, and tools.

In this reflective essay we argue that this is perhaps an understandable attitude, but also a self-limiting position that in the end will not be able to exploit the full potential of the ontology idea. ‘Application’ domains are to be taken much more seriously, as first-class citizens in CS and KE. Offen [1] points out that “domain understanding is the key to successful system development”. And in their launching statement of the new journal Applied Ontology, Nicola Guarino and Mark Musen [2] call for a “focus on content” in ontology research. In this position paper we explore some of the consequences, and we come to the seemingly paradoxical conclusion that taking (real-world) applications and (also non-CS) application domains really seriously will improve and widen ontology engineering as a scientific method in general.

2 Ontology as Scientific Method for Theory Formation

Ontologies are generally defined as explicit and formal specifications of a shared conceptualization for some domain of interest [3]. The term shared refers to an agreement within a community of interest or practice over the description (i.e. conceptualization) of the domain, while formal indicates that the representation of this agreement is in some sort of computer-understandable format. Note the rather open notion of a domain conceptualization in this definition: ontology research makes no claim about the nature of the knowledge to be modelled [4].
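To make concrete what such an explicit, formal, and computer-understandable specification can look like in its simplest form, consider the following sketch in Python using the rdflib library; the namespace, concept names, and instance data are purely illustrative and not drawn from any particular published ontology.

    from rdflib import Graph, Namespace, RDF, RDFS

    # Illustrative namespace for a toy value-exchange conceptualization.
    EX = Namespace("http://example.org/value-exchange#")

    g = Graph()
    g.bind("ex", EX)

    # Concept (class) definitions: the shared conceptualization.
    g.add((EX.Actor, RDF.type, RDFS.Class))
    g.add((EX.ValueObject, RDF.type, RDFS.Class))

    # A relation between the concepts, with explicit domain and range.
    g.add((EX.offers, RDF.type, RDF.Property))
    g.add((EX.offers, RDFS.domain, EX.Actor))
    g.add((EX.offers, RDFS.range, EX.ValueObject))

    # Instance-level statements that conform to the conceptualization.
    g.add((EX.MusicStore, RDF.type, EX.Actor))
    g.add((EX.Track, RDF.type, EX.ValueObject))
    g.add((EX.MusicStore, EX.offers, EX.Track))

    # Serialize the specification in a machine-readable exchange format.
    print(g.serialize(format="turtle"))

Even a miniature specification like this is ‘formal’ in the sense intended above: it can be exchanged, queried, and reasoned over by machines, while the agreement it encodes — which concepts exist and how they relate — remains a matter for the community of interest.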

Fig. 1. The conceptualization triangle and the dual reference of ontologies: an ontology is a theoretical model of a real-world domain and, at the same time, a computational specification for a computer information system.

2.1 The Dual Reference of Ontologies

Thus, as depicted in Figure 1, ontologies have a dual reference. They are not only ‘CS’ specifications referencing a computational implementation (like conventional information systems (IS) or database models or schemas); they have an explicit real-world content or substantive reference as well [5, 6, 2, 7]. In current CS and KE research the computational reference seems to get more of the attention. However, we believe that the domain content reference is at least equally important, and not just because this is where future massive application of ontologies will be. Offen [1] makes the important general comment that “domain understanding is the key to successful system development”. In our view, this is where the real value of ontologies outside the CS domain lies.

We note that developing this understanding — which is a key practical use of today’s ontology building [4] — is in fact a (real-world domain) conceptualization and theory formation act. The content or substantive reference of ontologies means that ontologies function as ways to build and test what are in fact theories that purport to adequately model an, often empirical, domain or phenomenon. In other words, ontology engineering can be viewed and employed as a method for theory formation, and one that appears to be useful for a wide variety of domains and disciplines. Many works in the philosophy of science (e.g. [8–11]) and scientific research methodology (e.g. [12–15]) emphasize the key importance, and difficulty, of conceptualization and theory formation in scientific research. We believe that the role of ontology as an instrument for conceptualization and theory formation in this sense is currently often overlooked.

2.2 The Importance of Domain Understanding and Theory

This way of looking at ontologies is what actually makes them interesting and useful for experts outside the CS domain: they offer ways to model phenomena of interest and, in addition, ways to test such models by means of simulation, calculation, or other computational means. Even more importantly, ontology engineering — as an advanced branch of conceptual modelling — provides richer and more flexible ways for conceptualization and theory formation than those currently in use in many domains and scientific disciplines.

Scientific theory generally seems to assume one of two extreme forms: either formal-mathematical (as typically encountered in the ‘exact’ natural sciences), or informal, in natural language and essayistic (common in the social sciences and the humanities). In contrast, IS-style conceptual modelling and particularly ontology engineering have over the years developed novel methods for conceptualization that are more formal and rigorous than theories in natural language, thus allowing for stronger, computational and other forms of validation, for example by CASE and simulation tools. At the same time, the graphical and diagram representations commonly developed and employed in conceptual modelling and ontology engineering make the associated theories much more understandable and accessible to experts and practitioners in other domains than formal mathematics and logic. Conceptual and ontological analysis, graphical diagramming methods, and their combination with formal computational reasoning techniques have over the years been elaborated into a fine art, at a level of sophistication not found elsewhere. Accordingly, looking at ontologies from their content reference point of view suggests that ontology engineering can be usefully interpreted as a new scientific method for conceptualization and theory formation, and one that brings several novel contributions to scientific method.
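As a minimal illustration of what ‘testing’ a domain conceptualization by computational means can amount to, the following sketch treats a miniature value-exchange theory as a set of checkable constraints over empirical records; the constraints, class names, and observations are hypothetical and serve only to show the principle.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Exchange:
        """An observed economic exchange, as recorded in a (hypothetical) field study."""
        provider: str
        recipient: str
        value_object: str

    def violated_constraints(e: Exchange) -> List[str]:
        """Return the names of the domain-theory constraints the observation violates."""
        violations = []
        # Constraint 1: an actor does not exchange value with itself.
        if e.provider == e.recipient:
            violations.append("provider-distinct-from-recipient")
        # Constraint 2: every exchange involves some object of value.
        if not e.value_object:
            violations.append("value-object-required")
        return violations

    # Hypothetical empirical observations to confront the theory with.
    observations = [
        Exchange("MusicStore", "Listener", "track"),
        Exchange("Listener", "Listener", ""),  # violates both constraints
    ]

    for obs in observations:
        print(obs, violated_constraints(obs) or "consistent with theory")

A clash between observations and constraints then forces a choice familiar from any empirical science: question the data, or revise the conceptualization.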

2.3 Ontologies as (Validatable) Middle-Range Theories

Conceiving the content/domain reference of ontologies as ‘the application view’ is quite natural for CS researchers, but overly limiting. Domain experts, in turn, often see it the other way around, equally simply: the CS work is the application side (or, even worse, ‘just programming’). Ontologies as domain theories (that are to be sharable and reusable) go significantly beyond the application view. This holds for the top-level ontologies (concerning, say, mereology, taxonomy, space, time, etc.) that formalize very generic concepts in use in and across many different domains. But it holds much more widely. What in CS and KE are called application domains are in fact themselves broad areas where ontologies can be (made) sharable and reusable across many different applications. Here, ontologies have the capability to express what in social science research are called “middle-range theories”: theories that have a much wider applicability than the situations, contexts, or cases from which they actually originate (for an interesting methodological discussion of how to develop middle-range theories in scientific research, see the book by one of the grand old men of sociology, Howard Becker [12]). This is consistent with our own experience in over 15 years of ontology and intelligent systems building and applications in different domains (e.g. [6, 16, 5, 17]), including networked business models, physical systems simulation and design, Internet music and radio, power networks and energy management, and health care electronic services.

Ontology engineering as a theory formation method is admittedly still at a relatively young stage. It widens the potential scope and importance of ontology engineering as a scientific method of general interest, but it also comes with additional scientific issues and duties. First, as already pointed out, there are in science very different notions of what a theory is, ought to be, or looks like. Ontology engineering as a multidisciplinary approach to theory formation thus has to deal with very different conceptions and styles of scientific research. Furthermore, scientific theories are supposed to be empirically and/or pragmatically valid in their domain of reference. In terms of the dual reference of Figure 1, ontologies are to be both computationally and epistemologically adequate. CS tends to focus on methods for the former, and indeed here it has a lot of added value for scientists and practitioners in other domains. But this is not all the evaluation and validation that has to be done. Ontologies are good only insofar as they are empirically valid from the domain theory point of view. This requires more and different validation activities than just computer-oriented ones. Computer implementation and testing of ontologies is necessary, but its strength lies in forms of consistency and validity that are internal to the theory being tested; it is restricted to what in philosophical terms is called a coherence conception of truth. Richer and stronger notions of validation, such as external validity, need more emphasis in the ontology research field.

Accordingly, we will discuss below (i) different images of science, with varying views on theory and its validation, and (ii) a broader categorization of the concept of validity. Both have consequences for how we view and carry out ontology engineering.

3 Images of Science

There exist in research very different images of science. Each comes with different underlying assumptions as well as consequences for ontology research. We therefore sketch here some of these different images of scientific research as practiced (or at least preached) in various disciplines: one more characteristic of the natural sciences, a second representing established schools of thought in social research, and a third that is more open but also less well defined, and often more typical of relatively young and/or interdisciplinary fields. We start with the latter.

3.1 Inclusive Models of Scientific Research

A first image of science is a rather broad-church, inclusive position concerning what is to be accepted as scientific research. Such a position is argued for, for example, in social research by Robson [13], and in philosophy by Feyerabend [11] as well as Toulmin [18]. (This does not necessarily imply agreement on other issues: Feyerabend wrote a book with the provocative title Farewell to Reason (1987), while Toulmin’s 2001 book is entitled Return to Reason.) But we also encounter it in a very recent discussion on the classification of Requirements Engineering research by Wieringa et al. [19]. As a shared background we single out two general elements: (i) scientific research is fundamentally concerned with knowledge claims; (ii) it is important that these claims are evaluated and validated in some way. The importance and indeed necessity of both elements is quite explicitly indicated, but both what these knowledge claims may refer to and how they may be justified in terms of method are circumscribed in a broad way.

The present authors are personally in favour of such a broad and inclusive position with regard to scientific research. Having worked in several different disciplines, however, we think it is useful to uncover and reflect upon the underlying assumptions, for two reasons. First, ontology research and related fields such as Requirements Engineering and Information Systems often work at the borderlines of, or in collaboration with, several different scientific disciplines and fields of professional practice (more specifically, inside as well as outside of CS). It is a common experience that other disciplines have specific (possibly narrower) views on what science is. Cross-boundary research activities have to be able to understand these views and relate to them. Second, different (seemingly ‘philosophical’) views on what science is or is not have important operational consequences for research as it is actually organized, carried out, and reviewed. Here, science is no different from business or industry: general strategic considerations (‘this is the kind of business we are or want to be in’) do influence the nature, style and management of actual operations.

A good working definition of this broad and inclusive view on scientific research that we propose (and actually use in our courses on research methodology) is the following: scientific research is a social practice concerned with the production of claims to knowledge through a process of inquiry in a way that is:

1. Relevant: the produced knowledge claims are to contribute, and have specified added value, to the current state of knowledge.
2. Systematic: the process of inquiry that leads to the production of knowledge claims is carried out in a systematic, critical, and rigorous fashion.
3. Transparent: knowledge claims are produced and argued for in such a way that they are clear and open to critical scrutiny by others.

This definition of scientific research does state a number of specific epistemic desiderata (which are commonly found in various guises in review and evaluation criteria), but it is inclusive in the sense that it contains no specific ontological commitments regarding the possible or allowed subject matter, goals, or methods of research. Current ontology research — if it reflects at all upon its positioning within science in general — often takes this position, however with a more or less tacit restriction to the CS domain and corresponding scientific priorities. Both this position and this restriction in our opinion need further scrutiny.

3.2 Natural Science Models

There exist several other images of science that are more specific — and effectively narrower in terms of theoretical method but potentially broader in terms of empirical validation — than the one outlined above. They are also visible in ontology research and engineering and related fields. One influential view has its roots in the natural and ‘exact’ sciences, as concisely characterized in Figure 2.

Fig. 2. Images of Natural Science. The figure summarizes three elements: (i) Theory — theory as formal mathematics and its machinery, built on fundamental “first” principles (an axiomatic basis, with Euclid as classical role model; conceptual organizational power, parsimony, Occam’s razor; in contrast with purely empirical, “phenomenological” models; abstract and distant from directly observable reality, with many often overlooked steps between principles and their test in observable reality) — principle-based formal theory as the core of the scientific approach; (ii) Experiment — validation by controlled observation and experimentation, with the experimental method as the core of the scientific approach; (iii) Engineering — either “just” the practical application of existing scientific knowledge (assuming that knowledge transfer is a linear value chain), or research using the scientific method for problem-solving goals related to practice (assuming a nonlinear value chain, and that goals other than explanation can be part of science).

Upon closer inspection, the natural science model of research itself appears to comprise (at least) two distinct, co-existing views on science. The first view, often labelled Baconian, is that controlled experimentation constitutes the core of the scientific method. The second view (which one may justifiably call, say, ‘Einsteinian’) is that the scientific method is first of all about uncovering the (abstract) fundamental ‘first’ principles that form the basis for universal theories and laws. These two views are not necessarily conflicting, but they do have to be recognized as clearly distinct. Operationally, they lead to emphasis on different kinds of research (visible in the scientific division of labour) and to a corresponding ‘gap’ between theory and experiment: validation of theories (e.g., in elementary particle physics or cosmology) involves a lot of ‘bridging’ research that constructs the knowledge of how first principles are ultimately expressed or reflected in the observable real-world phenomena that we can actually test.

3.3 Engineering: Is Design Scientific Research?

This image of science also plays a significant role in the received view of engineering (including CS). On one view, also cited in [19], engineering is the application of the scientific method (as outlined above) to practical problems. So engineering is a form of science in which only the goal of research is different — problem solving rather than explanation and prediction — but the method is the same. In contrast, the more traditional view sees engineering as ‘just the application’ of existing scientific knowledge to practical problems (and accordingly accords it a lower scientific status). This is a view (also of CS in general) that one encounters among both natural and social scientists, by the way. The traditional view is logically defensible only if two assumptions are made: (i) knowledge transfer forms a linear value chain; (ii) the possible goals of scientific research are limited, typically to explanation and prediction. Both assumptions are in our view very dubious, and can historically and empirically be shown to be incorrect.

The more defensible view is that engineering is a form of science (if done according to the above-stated general criteria of relevance, systematicity, and transparency), but with different goals, namely ones related to problem solving and guidance of practice. Accordingly, design or ‘solution proposal’ studies (as [19] calls them) can normally be part of scientific research [20] (for a recent similar argument in the area of Management Information Systems (MIS) see [21]) — although the present authors consider Simon’s argument to be very limited in view of accepted current design science practice, including ontology design. However, what makes design scientific is not the act of designing itself, just as it is not the act of writing that makes research papers scientific. Following our ‘inclusive’ definition of scientific research above, it is the ensuing claims to new knowledge and their justification that make design or problem-solving/solution proposals scientifically interesting. So, in design research it is mandatory that these knowledge claims are stated and argued for explicitly.

3.4 Social Science Models

There are still other images of science relevant to ontology research (cf. Peter Mika’s viewpoint article [22]) that stem from social research, as summarized in Figure 3. A major distinction here is that between the ‘quantitative’ and ‘qualitative’ schools of methodological thought (for extensive overviews, good sources are Robson [13], Bryman [15], and Babbie [23]).

Fig. 3. Images of Social Science. The figure contrasts two schools: (i) the Natural Science model (“quantitative” approach) — theory as (ideally) formal mathematics and its machinery; theory as a network of empirical variables; statistics; an “objective” stance; predictive and explanatory aims; empirical research validated by controlled observation and experimentation, with the experimental method as the core of the scientific approach and a separation of the contexts of discovery and justification (confirmation); and (ii) the “interpretive” Humanities model (“qualitative” approach) — theory as a coherent conceptual system (in natural language); the human as agent and subject; knowledge as a social construct; a “subjective” stance; explanatory and understanding-oriented aims; empirical research based on interpretation by observation, interview, text/conversation and symbolic (inter)action analysis, with a subject/context-inclusive methodology as the core of the scientific approach and discovery and justification (confirmation) seen as a cycle.

The quantitative school bases itself upon what it calls the ‘natural science model’ of social research. Experimentation is seen as the gold standard of the scientific method, especially in its guise of the randomized controlled trial (RCT) and so-called quasi-experimentation. Characteristic of this school are its reductionist approach, moving from complex conceptual frameworks and concepts (‘constructs’) to variables as its theoretical language, and, in evaluation and validation, its preference for statistical methods as the main route to empirical confirmation of hypotheses. (To put this image of science into a bit of perspective we observe that, based on our own experience in physics research, the average quantum physicist or theoretical chemist will not really recognize themselves in the natural science picture painted by quantitative social research (Figure 3): theory as formal, but not founded upon or in search of underlying fundamental first principles.)

Information Systems research as published in journals such as the MIS Quarterly, an important forum for much business-school-related research on IT, is strongly influenced by this quantitative school. Due to the traditional emphasis on empirical and confirmatory modes of research, its mainstream is far from considering engineering and design issues as part of scientific research, and as a result shows no tendency to propose solutions for the problems it considers. This explains why doing so [21] is seen as something of a recent breakthrough. Cohen’s work on Empirical Methods for Artificial Intelligence [24] is in fact to a significant extent a translation of the school of thought and empirical testing methodology of quantitative social research to AI and CS.

The qualitative school also focuses on empirical research, but emphasizes that (unlike in the natural sciences) it is not so much the outside and context-free view of the researcher/observer that matters in explaining the world, but the context-inclusive interpretation and meaning that people themselves attach to their social world. Works in Artificial Intelligence that are close to this ‘interpretive’ or ‘social constructivist’ approach to science are for example the volumes Knowledge Acquisition as Modeling edited by Ken Ford and Jeffrey Bradshaw [25], and Expertise in Context edited by Feltovich et al. [26]. Characteristic of the qualitative and interpretive approach are methods such as interview, focus group, observation, ethnography, action research, and case study — methods also widely used in IS research and practice, and employed to a lesser extent in ontology research [6, 5].

In sum, our research is influenced by several schools of methodological thought that make very different and sometimes even conflicting assumptions about what scientific research is. These assumptions shape the general approach to three salient issues: (i) how evaluation and validation is done; (ii) the attitude towards engineering and design and, more generally, the relationship between research and practice; (iii) the nature and role accorded to conceptualization and theory. Therefore, we believe it is important to be aware of, and clear about, the often tacit underlying assumptions made in researching, publishing and reviewing in the (theoretical as well as applied) ontology field.

4 Issues of Evaluation and Validation

Evaluation and validation are important general aspects and criteria of scientific research. If we are to take the dual reference, and especially the content reference, of ontologies seriously, a much stronger emphasis on empirical ontology validation is called for. This is a KE theme recurring in the work of Tim Menzies, e.g. [27], and it is also the thrust of Cohen’s [24] call to AI. We believe that these authors make a point that is very valid for current ontology research. There are, however, different ways in which stronger forms of validation can be implemented in research. Davis and Hickey [28] suggest looking at medicine as a model for how to do evaluation and validation research in a way that succeeds in establishing a clear relationship with practice in the field. The way they see this is quite close to Cohen’s approach [24]. Undoubtedly, there is much to learn from other disciplines. But it is quite another matter whether they are able to supply the proper research paradigm for information systems and ontology research.

4.1 Varieties of Validity I

For example, evidence-based medical research tends to be quite strongly informed by the quantitative school, as witnessed by standard, widely used textbooks such as [14]. Apart from the question whether this is not too narrow as a general position, it is also questionable whether it would adequately cover the issues specific to IS, RE and ontology research and practice — including the IT-oriented engineering and design aspects [21], the many conceptual information modelling and analysis techniques en vogue such as UML [29, 27], or the (often qualitative) ways practitioners actually work with their customers [28, 1]. Our own experience with the e3-value ontology approach in collaborative work with medical care researchers [17] suggests that this is not the case.

Also, the viewpoint article by Davis and Hickey themselves [28] is much less narrow in this respect: positioning “know thy customer” as a first rule, scenarios, human communication analysis, and “situational research” all sit very well with qualitative-interpretive, case study, and action research approaches. The key difference is the context-inclusiveness of these approaches, in contrast to the context-free scientific ideal of empirical confirmationist/falsificationist research [30], visible in much evidence-based medicine but also in much empirical IS and AI research [21, 24]. Ignoring or avoiding context, however, comes at a very (and in our view too) high price for scientific research in ontologies — an important point that was already made in older KE work [25, 26] but needs to be reiterated.

4.2 Varieties of Experimentation

In other scientific disciplines there are different conceptions of validity; they generally link to different specific evaluative questions one has to ask concerning research designs and results, but they also appear to depend on both the goals and the methods of the research. As the teleology of information systems and ontology research differs from that of both the social and the natural sciences, the field has to think for itself to develop an appropriate characterization of experimentation and empirical validation. Although it is customary to speak of the experimental method, there is a broad variety of experimentation relevant to our field. A useful categorization, we suggest, is:

1. Logico-mathematical demonstration (‘proof’): if a theory is sufficiently rigorously specified, certain desired properties may be strictly mathematically or logically derived. This is seen in the natural sciences and economics (e.g. the tendency to a unique equilibrium), but is also relevant, say, in safety-critical IS (e.g. showing that deadlock states are impossible to reach). Formal ontology and associated methods (from description logics to OntoClean) share similar idea(l)s.

2. Thought experiment: describes an idealized but not unrealistic pseudo-experimental scenario and then situationally explores the implications of conceptualizations and theories and their coherence (the debate between Einstein and Bohr on the completeness of quantum mechanics is a good example; see the Physical Review of 1935). In ontological analysis, walkthrough methods that employ mental or paper simulations of simple application scenarios are in our experience very effective.

3. Computational simulation and analysis: may be considered the next stage of the thought experiment; the computer has made it possible to run very large numbers of experimental scenarios and to explore a large parameter space (a use of computing widely found in the natural sciences, engineering, and social system simulation).

4. Laboratory experiment: the kind of experiment standard in the natural sciences but also in quantitative social research, based on clearly specified (e.g. causal) hypotheses, strictly controlled conditions, and minimized external context influence, so as to arrive at unambiguous knowledge claims.

5. Field experiment: pilots of interventions (problem-solving/solution proposals), familiar from engineering but also from medicine, education, and social evaluation research, where one attempts to demonstrate the usefulness of an intervention under realistic (context-inclusive) field conditions. Yin, a foremost author on the case study method [30], positions the case study as such a form of field experiment.

6. Practice/experience-oriented field study: if certain social, professional, or scientific practices have been established for a longer period, they can be observed and studied as empirical phenomena via both quantitative and qualitative modes of research (e.g. survey, observation, interview, ethnography).

4.3 Varieties of Validity II

Given this strong variety of experimental forms, it does not come as a surprise to find a similarly wide range of differing notions concerning the validity aspects of knowledge claims. In quantitative research we encounter construct, convergent and divergent validity, etc., which are absent in qualitative research, where we find instead, say, descriptive and interpretive validity. Campbell, reputed for his work on experimental methodology in the social sciences, makes some interesting observations here (see his Foreword to Yin’s case study methodology book [30]). He states that it is not experimentation per se that makes up the core of the scientific method, but rather the systematic treatment of plausible rival hypotheses, by investigating and testing the network of implications of the theories from which the hypotheses under scrutiny originate. Thus, theories are tested rather than just individual hypotheses (Campbell calls this “ramification extinction”). From this perspective, it becomes clear why all of the above-mentioned approaches (1)-(6) are to be accepted as suitable scientific test methods for knowledge claims. But mainstream ontology research currently tends to employ a relatively limited repertoire of these evaluation and validation methods.

A further step we want to make here is the observation that the validation of knowledge claims — whether theories, hypotheses, new conceptual methods, computational techniques, practice guidelines, or artefact designs — always boils down to the construction of a rational communicative argument that must be defended and made credible. In scientific work, available empirical data and theory are systematically brought together such that knowledge claims follow in a step-by-step, transparent process of rational reasoning. Therefore, we suggest that a more unifying framework for the variety of validity notions can be found by considering the general structure of argument. The general structure and rules of argument have been investigated over the years in philosophy [31], communication theory [32], and critical thinking and informal logic [33]. A simple model of the layout of arguments due to Toulmin [31] is suitable for our present purposes, and is depicted in Figure 4. This model suggests a categorization of validity notions, with a checklist of criteria that are helpful for reviewing the adequacy of the Figure 1 computational and especially content references of ontologies:

Fig. 4. The structure of argument. Data (the reasons) support a Claim, subject to Qualifications and Restrictions, via a Warrant (the interpretation of data into theory) that rests on a Backing (the theory).

1. Descriptive validity (D): do the supplied empirical (domain) data provide a truthful description of the situation or problem that is considered?
2. Theoretical validity (T): are the employed theories or conceptual frameworks explicated and shown to be appropriate for the (domain) purpose?
3. Interpretive validity (I): is the way in which all available data are mapped onto or interpreted in the employed theories or frameworks clear and adequate?
4. Reasoning validity (R): are all steps in the reasoning sound and, in addition, consistent and coherent with other knowledge that we possess?
5. Internal validity (C_int): are the claims made acceptable ‘beyond reasonable doubt’ within the situation or context (or sample) considered in the study?
6. External validity (C_ext): are any generalization claims that go beyond the studied situation sufficiently credible?

The above categorizations of experimentation and of validity go beyond Cohen’s basically quantitative approach to empirical methods in AI. In Requirements Engineering [19] it is explicitly stated that evaluation and validation may use “both quantitative and qualitative research methods, ranging from controlled experiments to case study and action research”. This fits with the context-inclusive research called for in KE and ontology engineering [25, 26].
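To indicate how such a checklist might be put to work in practice, the following sketch records the six criteria as an explicit data structure against which a study (here a hypothetical one) can be reviewed; the criterion wording follows the list above, but the representation and scoring are our own illustrative choice, not a prescribed procedure.

    from dataclasses import dataclass, field
    from typing import Dict, List

    # The six validity criteria, keyed by the shorthand used in the text.
    CRITERIA = {
        "D": "Descriptive validity: do the empirical (domain) data truthfully describe the situation?",
        "T": "Theoretical validity: are the employed theories explicated and appropriate?",
        "I": "Interpretive validity: is the mapping of data onto theory clear and adequate?",
        "R": "Reasoning validity: are all reasoning steps sound and coherent with other knowledge?",
        "C_int": "Internal validity: are the claims acceptable within the studied context?",
        "C_ext": "External validity: are generalization claims beyond that context credible?",
    }

    @dataclass
    class ValidityReview:
        study: str
        judgements: Dict[str, bool] = field(default_factory=dict)

        def open_issues(self) -> List[str]:
            """Criteria that are judged not (yet) satisfied, or not judged at all."""
            return [text for key, text in CRITERIA.items()
                    if not self.judgements.get(key, False)]

    # A hypothetical review of a single ontology-based case study.
    review = ValidityReview(
        study="ontology-based case study (hypothetical)",
        judgements={"D": True, "T": True, "I": True, "R": True,
                    "C_int": True, "C_ext": False},
    )
    for issue in review.open_issues():
        print(issue)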

5 Concluding Remarks on the Research Agenda

The call for a “focus on content” in ontology research by Nicola Guarino and Mark Musen [2] in their launching statement of the journal Applied Ontology has quite some implications and ramifications. In particular, we have argued that:

– Ontology engineering offers a potential contribution to scientific method in general, as a fundamental approach to conceptualization and theory formation with new techniques valuable to many (non-CS) domains and disciplines.
– This, however, requires that the content or substantive reference of ontologies is taken to heart by ontology engineers in addition to the common CS issues.
– This goes against the not uncommon attitude that outside domains represent application research (which, academically speaking, ranks lower than fundamental research).
– Above all, this requires that issues and methods of empirical and practical validation of theories — as much further developed in other scientific disciplines — become more prominent and more widely adopted in ontology engineering.

In sum, taking application domain theories and the associated content reference of ontologies really seriously as first-class citizens in our research will actually increase the contribution of ontology engineering to the development of scientific method in general.

A further step to be taken, in our view, is that ontologies should develop from the current static representations of relatively stable domain content into actionable theories-in-use (a concept stemming from organizational learning [34]). A possible way forward is to build in capabilities for self-organization of ontologies as service-oriented knowledge utilities (SOKUs) that can be delivered over the Web. But that is a story to be told another time.

Acknowledgments. This work was partially supported by the European Networks of Excellence INTEROP and KnowledgeWeb, the Netherlands Ministry of Economic Affairs through the BSIK Freeband/FRUX project, and the VU∗ VUBIS centre at VUA. We are also grateful for the many interesting reactions from the Semantic Web meeting participants in Amsterdam, where a first version of the present views was discussed by the first author.

References

1. Offen, R.: Domain understanding is the key to successful system development. Requirements Engineering 7 (2002) 172–175
2. Guarino, N., Musen, M.: Applied ontology: Focusing on content. Applied Ontology 1(1) (2005) 1–5
3. Gruber, T.: A translation approach to portable ontology specifications. Knowledge Acquisition 5(2) (1993) 199–220
4. Mika, P., Akkermans, J.: Towards a new synthesis of ontology technology and knowledge management. The Knowledge Engineering Review 19(4) (2004) 317–345. DOI online 11 November 2005.
5. Akkermans, J., Baida, Z., Gordijn, J., Peña, N., Altuna, A., Laresgoiti, I.: Value webs: Using ontologies to bundle real-world services. IEEE Intelligent Systems 19(4) (2004) 57–66
6. Gordijn, J., Akkermans, J.: Value-based requirements engineering: Exploring innovative e-Commerce ideas. Requirements Engineering 8(2) (2003) 114–134
7. Evermann, J., Wand, Y.: Ontology based object-oriented domain modelling: Fundamental concepts. Requirements Engineering 10 (2005) 146–160
8. Kuhn, T.: The Structure of Scientific Revolutions. The University of Chicago Press, Chicago (1970). Second Edition.
9. Lakatos, I., Musgrave, A., eds.: Criticism and the Growth of Knowledge. Cambridge University Press, Cambridge (1970)
10. Jammer, M.: The Philosophy of Quantum Mechanics. Wiley-Interscience, New York (1974)
11. Feyerabend, P.: Against Method. Verso, London (1993). Third Edition.
12. Becker, H.: Tricks of the Trade — How to Think About Your Research While You’re Doing It. University of Chicago Press, Chicago (1998)
13. Robson, C.: Real World Research. Blackwell Publishers, Oxford, UK (2002). Second Edition.
14. Bowling, A.: Research Methods in Health: Investigating Health and Health Services. Open University Press, Maidenhead, Berkshire, UK (2002). Second Edition.
15. Bryman, A.: Research Methods and Organization Studies. Routledge, London, UK (1989)
16. Borst, W., Akkermans, J., Top, J.: Engineering ontologies. International Journal of Human-Computer Studies 46 (1997) 365–406
17. Baida, Z.: Software-Aided Service Bundling — Intelligent Methods and Tools for Graphical Service Modeling. PhD thesis, Free University Amsterdam VUA, Amsterdam, NL (2006). Also Freeband/FRUX BSIK project deliverable.
18. Toulmin, S.: Return to Reason. Harvard University Press, Cambridge, MA (2001)
19. Wieringa, R., Maiden, N., Mead, N., Rolland, C.: Requirements engineering paper classification and evaluation criteria: A proposal and a discussion. Requirements Engineering 11(1) (2006) 102–107
20. Simon, H.: The Sciences of the Artificial. The MIT Press, Cambridge, MA (1996). Third Edition.
21. Hevner, A., March, S., Park, J., Ram, S.: Design science research in information systems. MIS Quarterly 28(1) (2004) 75–105
22. Mika, P.: Social networks and the Semantic Web: The next challenge. IEEE Intelligent Systems 20(1) (2005) 80–93
23. Babbie, E.: The Practice of Social Research. Wadsworth Publishing Company, Belmont, CA (1998). Eighth Edition.
24. Cohen, P.: Empirical Methods for Artificial Intelligence. The MIT Press, Cambridge, MA (1995)
25. Ford, K., Bradshaw, J., eds.: Knowledge Acquisition as Modeling. Wiley, New York, NY (1993)
26. Feltovich, P., Ford, K., Hoffman, R., eds.: Expertise in Context. AAAI Press / The MIT Press, Menlo Park, CA / Cambridge, MA (1997)
27. Menzies, T.: Model-based requirements engineering. Requirements Engineering 8 (2003) 193–194
28. Davis, A., Hickey, A.: Requirements researchers: Do we practice what we preach? Requirements Engineering 7(2) (2002) 107–111
29. Gemino, A., Wand, Y.: A framework for empirical evaluation of conceptual modeling techniques. Requirements Engineering 9 (2004) 248–260
30. Yin, R.: Case Study Research — Design and Methods. SAGE Publications, Thousand Oaks, CA (2003). Third Edition.
31. Toulmin, S.: The Uses of Argument. Cambridge University Press, Cambridge, UK (1958). Updated Edition 2003.
32. van Eemeren, F., Grootendorst, R.: A Systematic Theory of Argumentation — The Pragma-Dialectical Approach. Cambridge University Press, Cambridge, UK (2004)
33. Fisher, A.: The Logic of Real Arguments. Cambridge University Press, Cambridge, UK (2004). Second Edition.
34. Argyris, C.: Knowledge for Action. Jossey-Bass Publishers, San Francisco, CA (1993)