Artificial Intelligence

On the logic of induction

Peter A. Flach*
INFOLAB, Tilburg University, PO Box 90153, 5000 LE Tilburg, the Netherlands

Abstract. This paper presents a logical analysis of induction. Contrary to common approaches to inductive logic that treat inductive validity as a real-valued generalisation of deductive validity, we argue that the only logical step in induction lies in hypothesis formation rather than evaluation. Inspired by the seminal paper of Kraus, Lehmann & Magidor [18] we analyse the logic of inductive hypothesis formation on the metalevel of consequence relations. Two main forms of induction are considered: explanatory induction, aimed at inducing a general theory explaining given observations, and confirmatory induction, aimed at characterising completely or partly observed models. Several sets of meta-theoretical properties of inductive consequence relations are considered, each of them characterised by a suitable semantics. The approach followed in this paper is extensively motivated by referring to recent and older work in philosophy, logic, and Machine Learning.

1. Introduction

This paper is an attempt to develop a logical account of inductive reasoning, one of the most important ways to synthesize new knowledge. Induction provides an idealized model for empirical sciences, where one aims to develop general theories that account for phenomena observed in controlled experiments. It also provides an idealized model for cognitive processes such as learning concepts from instances. The advent of the computer has suggested new inductive tasks such as program synthesis from examples of input-output behaviour and knowledge discovery in databases, and the application of inductive methods to Artificial Intelligence problems is an active research area, which has displayed considerable progress over the last decades. On the foundational side, however, our understanding of the essentials of inductive reasoning is fragmentary and confused. Induction is usually defined as inference of general rules from particular observations, but this slogan can hardly count as a definition. Clearly some rules are better than others for given observations, while yet other rules are totally unacceptable. A logical account of induction should shed more light on the relation between observations and hypotheses, much like deductive logic formalises the relation between theories and their deductive consequences. This is by no means an easy task, and anyone claiming to provide a definitive solution should be approached sceptically. The main contribution of this paper lies in the novel perspective that is obtained by combining older work in philosophy of science with a methodology suggested by recent work in formalising nonmonotonic reasoning. This perspective provides us with a descriptive — rather than prescriptive — account of induction, which clearly indicates both the opportunities for and limitations of logical analysis when it comes to modelling induction.

Peter A. Flach: On the logic of induction (submitted, June 5, 1996)

1.1 Problem formulation and approach

I should start by stressing that the study reported on in this paper should be perceived as an application of logical analysis to problems in Artificial Intelligence. Thus, we will take it for granted that there exists a distinct and useful form of reasoning called induction. As a model for this form of reasoning we may take the approaches to learning classification rules from examples that can be found in the Machine Learning literature, or the work on inducing Prolog programs and first-order logical theories from examples in the recently established discipline of Inductive Logic Programming. By taking this position we will avoid the controversies abounding in philosophy of science as to whether or not science proceeds by inductive methods. This is not to say that I will completely ignore philosophical considerations — in fact, my approach has been partly motivated by works from the philosophers Charles Sanders Peirce and Carl G. Hempel, as I will explain shortly. The main question addressed in this paper is the following: can we develop a logical account of induction that is sufficiently similar to the modern account of deduction? By ‘the modern account of deduction’ I mean the by now standard approach, developed in the first half of this century, of defining a logical language, a semantical notion of deductive consequence, and a proof system of axioms and inference rules operationalising the relation of deductive consequence. By the stipulation that the logical account of induction be ‘sufficiently similar’ to this modern account of deduction I mean that the former should likewise consist of a semantical notion of inductive consequence and a corresponding proof system. Those perceiving logic as the ‘science of correct reasoning’ will now object that what I am after is a deductive account of induction, and it has been known since Hume that inductive hypotheses are necessarily defeasible. My reply to this objection is that it derives from too narrow a conception of logic. In my view, logic is the science of reasoning, and it is the logician’s task to develop formal models of every form of reasoning that can be meaningfully distinguished. In developing such formal models for nondeductive reasoning forms, we should keep in mind that deduction is a highly idealized and restricted reasoning form, and that we must be prepared to give up some of the features of deductive logic if we want to model reasoning forms that are less perfect, such as induction. The fundamental question then is: which features are inherent to logic per se, and which are accidental to deductive logic?

* E-mail: [email protected].
To illustrate this point, consider the notion of truth-preservation: whenever the premisses are true, the conclusion is true also. It is clear that truth-preservation must be given up as soon as we step out of the deductive realm. The question then arises whether a logical semantics is mainly a tool for assessing the truth of the conclusion given the truth of the premisses, or whether its main function is rather to define what property is preserved when passing from premisses to conclusion. We will address this and similar fundamental questions in this paper.

Another objection against the approach I propose could be that deductive logic is inherently prescriptive: it clearly demarcates the logical consequences one should accept on the basis of given premisses from the ones one should not accept. Clearly, our understanding of induction is much too limited to be able to give a prescriptive account of induction. My reply to this objection is that, while such a demarcation is inherent to logic, its interpretation can be either prescriptive or descriptive. The inductive logics I propose in this paper distinguish between hypotheses one should not accept on the basis of given evidence, relative to a certain goal one wants the hypothesis to fulfil, and hypotheses one might accept. Put differently, these inductive logics formalise the logic of inductive hypothesis formation rather than hypothesis selection, which I think is the best one can hope to achieve by purely logical means.

The objective pursued in this paper, then, is to develop semantics and proof systems for inductive hypothesis formation. What is new here is not so much this objective, which has been pursued before (see e.g. [4]), but the meta-theoretical viewpoint taken in this paper, which I think greatly benefits our understanding of the main issues.
This meta-theoretical viewpoint has been inspired by the seminal paper of Kraus, Lehmann & Magidor [18], where it is employed to unravel the fundamental properties of nonmonotonic reasoning. Readers familiar with the paper of Kraus et al. may alternatively view the present paper as a constructive proof of the thesis that their techniques in fact establish a methodology, by demonstrating how they can be successfully applied to analyse a rather different form of reasoning.

1.2 Plan of the paper

The paper is structured as follows. In section 2 the philosophical, logical, and Machine Learning backgrounds of this paper are surveyed. Section 3 introduces the main logical tool employed in this paper: the notion of a metalevel consequence relation. Sections 4 and 5 form the technical core of this paper, stating representation theorems characterising sets of metalevel properties of explanatory induction and confirmatory induction, respectively. In section 6 we discuss the implications of the approach taken and results obtained in this paper. Section 7 summarises the main conclusions.


2. Backgrounds

This section reviews a number of related approaches from the philosophical, logical, and Machine Learning literature. With such a complex phenomenon as induction, one cannot hope to give an overview that can be called complete in any sense — I will restrict attention to those approaches that either can be seen as precursors to my approach, or else are considered as potential answers to my objectives but rejected upon closer inspection. We start with the latter.

2.1 Inductive probability

By now it is commonplace to draw a connection between inductive reasoning and probability calculus. Inductive or subjective probability assesses the degree to which an inductive agent is willing to accept a hypothesis on the basis of available evidence. A so-called posterior probability of the hypothesis after observing the evidence is obtained by applying Bayes’ theorem to the probability of the hypothesis prior to observation. Rudolf Carnap has advocated the view that inductive probability gives rise to a system of inductive logic [3]. Briefly, Carnap defines a function c(H,E) assigning a degree of confirmation (a number between 0 and 1) to a hypothesis H on the basis of evidence E. This function generalises the classical notion of logical entailment — which can be seen as a ‘confirmation function’ from premisses and conclusion to {0,1} — to an inductive notion of ‘partial entailment’:

‘What we call inductive logic is often called the theory of nondemonstrative or nondeductive inference. Since we use the term ‘inductive’ in the wide sense of ‘nondeductive’, we might call it the theory of inductive inference... However, it should be noticed that the term ‘inference’ must here, in inductive logic, not be understood in the same sense as in deductive logic. Deductive and inductive logic are analogous in one respect: both investigate logical relations between sentences; the first studies the relation of [entailment], the second that of degree of confirmation which may be regarded as a numerical measure for a partial [entailment]... The term ‘inference’ in its customary use implies a transition from given sentences to new sentences or an acquisition of a new sentence on the basis of sentences already possessed. However, only deductive inference is inference in this sense.’ [3, §44B, pp.205–6]

This citation succinctly summarises why inductive probability is not suitable, in my view, as the cornerstone of a logic of induction. My two main objections are the following.

(1) Inductive probability treats all nondeductive reasoning as inductive. This runs counter to one of the main assumptions of this paper, namely that induction is a reasoning form in its own right, which we want to characterise in terms of properties it enjoys rather than properties it lacks. A more practical objection is that a single logical foundation for all possible forms of nondeductive reasoning is likely to be rather weak. Indeed, I would argue that in many forms of reasoning the goal that is to be fulfilled by the hypothesis, such as explaining the observations, is not reducible to a degree of confirmation.1

(2) Inductive probability, taken as partial entailment, leads to a degenerate view of logic. This is essentially what Carnap notes when he states that his inductive logic does not establish inference in the same sense as deductive logic (although he would not call it a degeneration). This means that, for instance, the notion of a proof reduces to a calculation of the corresponding degree of confirmation. A possible remedy is to define and axiomatise a qualitative relation of confirmation, such as the relation defined by qc(H,E) ⇔ c(H,E) > c(H,true). However, such a qualitative relation of confirmation can also be postulated without reference to numerical degrees of confirmation, which would give us much more freedom to investigate the relative merits of different axiom systems. In fact, this is the course of action taken by Hempel, as we will see in the next section.

I should like to stress that it is not inductive probability or Bayesian belief measures as such which are criticised here — on the contrary, I believe these to be significant approaches to the important problem of how to update an agent’s beliefs in the light of new information.
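To make the intended reading of qc concrete, the following sketch interprets c(H,E) as the conditional probability of H given E over a toy joint distribution; both this reading of c and the numerical weights are assumptions made purely for illustration.

```python
# Toy joint distribution over two propositions H and E (weights are
# hypothetical, chosen only so that E is evidentially relevant to H).
weights = {
    (True, True): 0.4,   # H and E
    (True, False): 0.1,  # H without E
    (False, True): 0.1,  # E without H
    (False, False): 0.4, # neither
}

def p(pred):
    """Probability of the event defined by pred(h, e)."""
    return sum(w for (h, e), w in weights.items() if pred(h, e))

def c(h_pred, e_pred):
    """Degree of confirmation, here read as conditional probability P(H|E)."""
    return p(lambda h, e: h_pred(h, e) and e_pred(h, e)) / p(e_pred)

def qc(h_pred, e_pred):
    """Qualitative confirmation: qc(H,E) iff c(H,E) > c(H,true)."""
    return c(h_pred, e_pred) > c(h_pred, lambda h, e: True)

H = lambda h, e: h
E = lambda h, e: e
true = lambda h, e: True
print(c(H, E), c(H, true))  # posterior 0.8 versus a prior of 0.5
print(qc(H, E))             # True: E raises the probability of H
```

The point of the sketch is that qc discards the numbers: only the comparison with the prior survives, which is exactly the kind of qualitative relation that can also be postulated axiomatically.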
Since belief measures express the agent’s subjective estimates of the truth of hypotheses, let us say that inductive probability and related approaches establish a truth-estimating procedure. My main point is that such truth-estimating procedures are, generally speaking, complementary to logical systems. Truth-estimating procedures answer a type of question which nondeductive logical systems, in general, cannot answer, namely: how plausible is this hypothesis given this evidence? The fact that deductive logic incorporates such a truth-estimating procedure is accidental to deductive reasoning; the farther one moves away from deduction, the less the logical system has to do with truth-estimation. For instance, the gap between logical systems for nonmonotonic reasoning and truth-estimating procedures is much smaller than the gap between the latter and logical systems for induction. Indeed, one may employ the same truth-estimating procedure for very different forms of reasoning.

1 Note that degree of confirmation is not a quantity that is simply to be maximised, since this would lead us straight back into deductive logic.

2.2 Confirmation as a qualitative relation

Carl G. Hempel [15, 16] developed a qualitative account of induction that will form the basis of the logical system for what I call confirmatory induction (section 5). Carnap rejected Hempel’s approach, because he considered a quantitative account of confirmation as more fundamental than a qualitative account. However, as explained above I think that the two are conceived for different purposes: a function measuring degrees of confirmation can be used as a truth-estimating procedure, while a qualitative relation of confirmation can be used as the cornerstone for a logical system. I also consider the two as relatively independent: a qualitative confirmation relation that cannot be obtained from a numerical confirmation function is not necessarily ill-conceived, as long as the axioms defining the qualitative relation are intuitively meaningful. Hempel’s objective is to develop a material definition of confirmation. Before doing so he lists a number of adequacy conditions any such definition should satisfy. Such adequacy conditions can be seen as metalevel axioms, and we will discuss them at some length. The following conditions can be found in [16, pp.103–106, 110]; logical consequences of some of the conditions are also stated.

(H1) Entailment condition: any sentence which is entailed by an observation report is confirmed by it.

(H2) Consequence condition: if an observation report confirms every one of a class K of sentences, then it also confirms any sentence which is a logical consequence of K.

(H2.1) Special consequence condition: if an observation report confirms a hypothesis H, then it also confirms every consequence of H.

(H2.2) Equivalence condition: if an observation report confirms a hypothesis H, then it also confirms every hypothesis which is logically equivalent with H.

(H2.3) Conjunction condition: if an observation report confirms each of two hypotheses, then it also confirms their conjunction.

(H3) Consistency condition: every logically consistent observation report is logically compatible with the class of all the hypotheses which it confirms.

(H3.1) Unless an observation report is self-contradictory, it does not confirm any hypothesis with which it is not logically compatible.

(H3.2) Unless an observation report is self-contradictory, it does not confirm any hypotheses which contradict each other.

(H4) Equivalence condition for observations: if an observation report B confirms a hypothesis H, then any observation report logically equivalent with B also confirms H.

The entailment condition (H1) simply means that entailment ‘might be referred to as the special case of conclusive confirmation’ [16, p.107]. The consequence conditions (H2) and (H2.1) state that the relation of confirmation is closed under weakening of the hypothesis or set of hypotheses (H1 is weaker than H2 iff it is logically entailed by the latter). Hempel justifies this condition as follows [16, p.103]: ‘an observation report which confirms certain hypotheses would invariably be qualified as confirming any consequence of those hypotheses. Indeed: any such consequence is but an assertion of all or part of the combined content of the original hypotheses and has therefore to be regarded as confirmed by any evidence which confirms the original hypotheses.’ Now, this may be reasonable for single hypotheses (H2.1), but much less so for sets of hypotheses, each of which is confirmed separately. The culprit can be identified as (H2.3), which together with (H2.1) implies (H2). A similar point can be made as regards the consistency condition (H3), about which Hempel remarks that it ‘will perhaps be felt to embody a too severe restriction’. (H3.1), on the other hand, seems to be reasonable enough; however, combined with the conjunction condition (H2.3) it implies (H3). We thus see that Hempel’s adequacy conditions are intuitively justifiable, except for the conjunction condition (H2.3) and, a fortiori, the general consequence condition (H2). On the other hand, the conjunction condition can be justified by a completeness assumption on the evidence, as will be further discussed in section 5. We close this section by noting that Hempel’s material definition of the relation of confirmation of a hypothesis by evidence roughly corresponds to what we would nowadays call ‘truth of the hypothesis in the truth-minimal Herbrand model of the evidence’. We will return to material definitions of qualitative confirmation in section 2.5.
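The step from (H2.1) plus (H2.3) to (H2) can be checked mechanically for a finite class K: repeated application of (H2.3) yields confirmation of the conjunction of K, and that conjunction entails every logical consequence of K. The truth-table sketch below verifies the entailment facts involved for two hypothetical propositional atoms p and q; it is an illustration of the derivation, not part of Hempel's own presentation.

```python
from itertools import product

def entails(a, c, atoms):
    """A |= C by exhaustive truth-table check over the given atoms."""
    return all(c(dict(zip(atoms, vs))) or not a(dict(zip(atoms, vs)))
               for vs in product([False, True], repeat=len(atoms)))

atoms = ["p", "q"]
h1 = lambda v: v["p"]             # first separately confirmed hypothesis
h2 = lambda v: v["q"]             # second separately confirmed hypothesis
conj = lambda v: h1(v) and h2(v)  # their conjunction, confirmed via (H2.3)
g = lambda v: v["p"] == v["q"]    # p <-> q: a consequence of K = {h1, h2}

# (H2.3) confirms h1 AND h2; since the conjunction entails g, (H2.1) then
# confirms g, which is exactly what (H2) demands.
assert entails(conj, g, atoms)
# Neither hypothesis alone entails g, so the conjunction step is essential:
assert not entails(h1, g, atoms) and not entails(h2, g, atoms)
print("the conjunction condition is what extends (H2.1) to the full (H2)")
```

This makes visible why (H2.3) is the culprit: without it, (H2.1) applied to h1 or h2 alone would never reach g.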

2.3 Abduction

Predating Hempel’s work on confirmation by almost half a century is the work of Charles Sanders Peirce on abduction: the process of forming explanatory hypotheses, which I will briefly discuss in this section. In a series of lectures on Pragmatism delivered in 1903, Peirce distinguishes three types of reasoning: deduction, induction, and abduction. Induction ‘consists in starting from a theory, deducing from it predictions of phenomena, and observing those phenomena in order to see how nearly they agree with the theory’. Furthermore,

‘The justification for believing that an experiential theory which has been subjected to a number of experimental tests will be in the near future sustained about as well by further such tests as it has hitherto been, is that by steadily pursuing that method we must in the long run find out how the matter really stands.’ [13, 5.170]

Note that Peirce claims, like Carnap, that induction evaluates the plausibility of a given theory, rather than constructing that theory from observations. However, inductive hypotheses do not come out of the blue, and this is where abduction comes into play: ‘Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea; for induction does nothing but determine a value, and deduction merely evolves the necessary consequences of a pure hypothesis. Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be. Its only justification is that from its suggestion deduction can draw a prediction which can be tested by induction, and that, if we are ever to learn anything or to understand phenomena at all, it must be by abduction that this is to be brought about. No reason whatsoever can be given for it, as far as I can discover; and it needs no reason, since it merely offers suggestions.’ [13, 5.171]

In other words, abduction is the process of conjecturing inductive hypotheses, constrained by the requirement that they should comply with the available observations. Abduction represents the purely logical part of inductive reasoning.2 Peirce proceeds by defining the logical form of abduction. ‘It must be remembered’, he writes, ‘that abduction, although it is very little hampered by logical rules, nevertheless is logical inference, asserting its conclusion only problematically or conjecturally, it is true, but nevertheless having a perfectly definite logical form.’ Peirce then defines this logical form, as follows.

2 Unfortunately, the term ‘abduction’ is nowadays used in two different ways. Peirce himself is at least partly to blame for this confusion, since he first proposed a rather different, syllogistic classification of reasoning forms, which can be summarised as follows. Consider the Aristotelian syllogism Barbara: ‘All the beans from this bag are white; these beans are from this bag; therefore, these beans are white’. Now there are two ways to exchange the conclusion with one of the premisses, one resulting in the inductive syllogism ‘These beans are white; these beans are from this bag; therefore, all the beans from this bag are white’, the other in ‘All the beans from this bag are white; these beans are white; therefore, these beans are from this bag’. Peirce refers to this latter syllogism as (forming a) hypothesis. This syllogistic theory has to a large extent been adopted in the discipline of logic programming, where abduction (ironically, the term was only introduced in Peirce’s later theory) is generally perceived as the inference of ground facts from rules and a query that is to be explained. Notice that a logic based on entailment rather than syllogisms is unable to distinguish between the two latter syllogisms, which both embody a form of reversed deduction. See [10].


‘Long before I first classed abduction as an inference it was recognized by logicians that the operation of adopting an explanatory hypothesis — which is just what abduction is — was subject to certain conditions. Namely, the hypothesis cannot be admitted, even as a hypothesis, unless it be supposed that it would account for the facts or some of them. The form of inference, therefore, is this: The surprising fact, C, is observed; But if A were true, C would be a matter of course, Hence, there is reason to suspect that A is true. Thus, A cannot be abductively inferred, or if you prefer the expression, cannot be abductively conjectured until its entire content is already present in the premiss, “If A were true, C would be a matter of course.” ’ [13, 5.188]

In short, the view of induction that Peirce offers here is this. Inductive reasoning consists of two steps: (i) formulating a conjecture, and (ii) evaluating the conjecture. Both steps take the available evidence into account, but in quite different ways and with different goals. The first step requires that the conjectured hypothesis explains the observations; having a definite logical form, it represents a form of inference. The second step evaluates how well predictions offered by the hypothesis agree with reality; it is not inference, but assigns a numerical value to a hypothesis. In order to avoid terminological problems, I will not use Peirce’s terminology and refer to the first step as explanatory hypothesis formation, and to the second as hypothesis evaluation or validation. Leaving a few details aside, Peirce’s definition of explanatory hypothesis formation can be formalised as the inference rule

    C,  A ⊨ C
    ---------
        A

In this paper I propose to generalise Peirce’s definition by including the relation of ‘is explained by’ as a parameter. This is achieved by lifting the explanatory inference from C to A to the metalevel, as follows:

    A ⊨ C
    ------
    C |< A

The symbol |< stands for the explanatory consequence relation. Axiom systems for this relation will be considered in section 4.
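Peirce's rule (from the observation C and the fact that A entails C, conjecture A) can be prototyped with a truth-table entailment check. In the sketch below, the requirement that A itself be satisfiable is an extra assumption I add (a contradiction entails everything but explains nothing), and the atoms and candidate hypotheses are invented for illustration:

```python
from itertools import product

def entails(a, c, atoms):
    """A |= C: every valuation making A true also makes C true."""
    for vals in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if a(v) and not c(v):
            return False
    return True

def explains(c, a, atoms):
    """Metalevel check C |< A, read here as: A |= C and A is satisfiable
    (the consistency requirement is an assumption added for this sketch)."""
    satisfiable = any(a(dict(zip(atoms, vals)))
                      for vals in product([False, True], repeat=len(atoms)))
    return satisfiable and entails(a, c, atoms)

atoms = ["rain", "wet"]
C = lambda v: v["wet"]                    # the surprising fact: the grass is wet
A = lambda v: v["rain"] and v["wet"]      # candidate: it rained, wetting the grass
B = lambda v: v["rain"] and not v["wet"]  # candidate incompatible with the fact

print(explains(C, A, atoms))  # True: A accounts for C
print(explains(C, B, atoms))  # False: B does not entail C
```

The point is that explains only filters conjectures against the evidence; it says nothing about which admissible conjecture to select, matching the division of labour between abduction and induction described above.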

2.4 Confirmation vs. explanation

We now have encountered two fundamental notions that play a role in inductive hypothesis formation: one is that the hypothesis should be confirmed by the evidence, the other that the hypothesis should explain the evidence. Couldn’t we try and build the requirement that the hypothesis be explanatory into our definition of confirmed hypothesis? The problem is that an unthinking combination of explanation and confirmation can easily lead into paradox. Let H1 and H2 be two theories such that the latter includes the former, in the sense that everything entailed by H1 is also entailed by H2. Suppose E is confirming evidence for H1; shouldn’t we conclude that it confirms H2 as well? To borrow an example of Hempel: ‘Is it not true, for example, that those experimental findings which confirm Galileo’s law, or Kepler’s laws, are considered also as confirming Newton’s law of gravitation?’ [16, p.104]. This intuition is formalised by the following condition:

(H5) Converse consequence condition: if an observation report confirms a hypothesis H, then it also confirms every formula logically entailing H.

The problem is, however, that this rule is incompatible with the special consequence condition (H2.1). This can be seen as follows: in order to demonstrate that E confirms H for arbitrary E and H, we note that E confirms E by (H1), so by the converse consequence condition E confirms E∧H; but then E confirms H by (H2.1). Thus, we see that the combination of two intuitively acceptable conditions leads to a collapse of the system into triviality, a clearly paradoxical situation. Hempel concludes that one cannot have both (H2.1) and (H5), and drops the latter. His justification of this decision is however unconvincing, which is not surprising since neither is a priori better than the other: they formalise different intuitions. While (H2.1) formalises a reasonable intuition about confirmation, (H5) formalises an equally reasonable intuition about explanation:

(H5′) if an observation report is explained by a hypothesis H, then it is also explained by every formula logically entailing H.

In this paper I defend the position that Hempel was reluctant to take, namely that with respect to inductive or scientific hypothesis formation there is more than one possible primitive notion: the relation ‘confirms’ between evidence and hypothesis, and the relation ‘is explained by’. Each of these primitives gives rise to a specific form of induction. This position is backed up by recent work in Machine Learning, to which we will turn now.
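The collapse just derived can be traced step by step. The truth-table sketch below verifies the entailment facts driving each of the three steps, for two logically unrelated (and purely hypothetical) propositions E and H:

```python
from itertools import product

def entails(a, c, atoms):
    """A |= C over a finite propositional vocabulary (truth-table check)."""
    return all(c(dict(zip(atoms, vs))) or not a(dict(zip(atoms, vs)))
               for vs in product([False, True], repeat=len(atoms)))

atoms = ["e", "h"]              # two logically unrelated propositions
E = lambda v: v["e"]
H = lambda v: v["h"]
EandH = lambda v: v["e"] and v["h"]

# Step 1 (H1): E |= E, so E confirms E.
assert entails(E, E, atoms)
# Step 2 (H5): E∧H |= E, so by converse consequence E confirms E∧H.
assert entails(EandH, E, atoms)
# Step 3 (H2.1): E∧H |= H, so by special consequence E confirms H.
assert entails(EandH, H, atoms)
print("collapse: an arbitrary E ends up confirming an arbitrary H")
```

Since nothing about E or H was used beyond these entailments, any evidence confirms any hypothesis once (H1), (H5), and (H2.1) are combined.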

2.5 Inductive Machine Learning

Without doubt, the most frequently studied induction problem in Machine Learning is concept learning from examples. Here, the observations take the form of descriptions of instances (positive examples) and non-instances (negative examples) of an unknown concept, and the goal is to find a definition of the concept that correctly discriminates between instances and non-instances. Notice that this problem statement is much more concrete than the general description of induction as inference from the particular to the universal: once the languages in which instances and concepts are described are fixed, the desired relation between evidence and hypothesis is determined. A natural choice is to employ a predicate for the concept to be learned, and to use constants to refer to instances and non-instances. In this way, a classification of an instance can be represented by a truth-value, which can be obtained by setting up a proof.3 We then obtain the following general problem definition:

Problem: Concept learning from examples in predicate logic.
Given: (1) A predicate-logical language. (2) A predicate representing the target concept. (3) Two sets P and N of ground literals of this predicate, representing the positive and negative examples. (4) A background theory T containing descriptions of instances.
Determine: A hypothesis H within the provided language such that (i) for all p∈P: T∪H ⊨ p; (ii) for all n∈N: T∪H ⊭ n.

Notice that condition (ii) is formulated in such a way that the hypothesis only needs to contain sufficient conditions for concept membership (since a negative classification is obtained by negation as failure). This suggests an analogy between concept definitions and Horn clauses, which can be articulated by allowing (possibly recursive) logic programs as hypotheses and background knowledge, leading us into the field of Inductive Logic Programming (ILP) [22].
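Conditions (i) and (ii) can be sketched in a few lines, with entailment approximated by forward chaining over ground Horn clauses. The predicates, constants, and candidate hypothesis below are hypothetical, and real ILP systems of course work with first-order clauses and proper proof procedures rather than this ground approximation:

```python
def consequences(facts, rules):
    """Forward-chain ground Horn rules (head, body) to a fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

# Hypothetical background theory T: descriptions of two instances.
T = {"has_wings(tweety)", "lays_eggs(tweety)", "has_fur(rex)"}

# Candidate hypothesis H: bird(x) <- has_wings(x), lays_eggs(x),
# instantiated by hand for the two constants of this toy domain.
H = [("bird(tweety)", ["has_wings(tweety)", "lays_eggs(tweety)"]),
     ("bird(rex)",    ["has_wings(rex)",    "lays_eggs(rex)"])]

P = {"bird(tweety)"}   # positive examples: must follow from T ∪ H
N = {"bird(rex)"}      # negative examples: must NOT follow from T ∪ H

closure = consequences(T, H)
print(all(p in closure for p in P))        # condition (i):  True
print(all(n not in closure for n in N))    # condition (ii): True
```

Note how condition (ii) is satisfied simply because bird(rex) is not derivable, mirroring the negation-as-failure reading discussed above.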
Furthermore, P and N may contain more complicated formulae than ground facts. The general problem statement then becomes: given a partial logic program T, extend it with clauses H such that every formula in P is entailed and none of the formulae in N. The potential for inductive methods in Artificial Intelligence is however not exhausted by classification-oriented approaches. Indeed, it seems fair to say that most knowledge implicitly represented by extensional databases is non-classificatory. Several researchers have begun to investigate non-classificatory approaches to knowledge discovery in databases. For instance, in previous work I have demonstrated that the problem of inferring the set of functional and multivalued attribute dependencies satisfied by a database relation can be formulated as an induction problem [6, 7, 8]. Furthermore, De Raedt & Bruynooghe have generalized the classificatory ILP-setting in order to induce non-Horn clauses from ground facts [5]. Both approaches essentially employ the following problem statement.

3 The alternative is to represent concepts by open formulae, and to operationalise classification by means of subsumption.


Problem: Non-classificatory induction.
Given: (1) A predicate-logical language. (2) Evidence E.
Determine: A hypothesis H within the provided language such that: (i) H is true in a model m0 constructed from E; (ii) for all g within the language, if g is true in m0 then H ⊨ g.

Essentially, the model m0 employed in the approaches by De Raedt & Bruynooghe and myself is the truth-minimal Herbrand model of the evidence.4 The hypothesis is then an axiomatisation of all the statements true in this model, including non-classificatory statements like ‘everybody is male or female’ and ‘nobody is both a father and a mother’. The relation between the classificatory and non-classificatory approaches to induction is that they both aim at extracting similarities from examples. The classificatory approach to induction achieves this by constructing a single theory that entails all the examples. In contrast, the non-classificatory approach achieves this by treating the examples as a model — a description of the world that may be considered complete, at least for the purposes of constructing inductive hypotheses. The approach is justified by the assumption that the evidence expresses all there is to know about the individuals in the domain. Such a completeness assumption is reminiscent of the Closed World Assumption familiar from deductive databases, logic programming, and default reasoning — however, in the case of induction its underlying intuition is quite different. As Nicolas Helft, one of the pioneers of the non-classificatory approach, puts it: ‘induction assumes that the similarities between the observed data are representative of the rules governing them (…). This assumption is like the one underlying default reasoning in that a priority is given to the information present in the database. In both cases, some form of “closing-off” the world is needed.
However, there is a difference between these: loosely speaking, while in default reasoning the assumption is “what you are not told is false”, in similarity-based induction it is “what you are not told looks like what you are told”.’ [14, p.149]
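The closed-world reading of the evidence can be made concrete in a small sketch. The following Python fragment is illustrative only (the fact base is invented; for ground atomic facts the evidence itself coincides with its truth-minimal Herbrand model): it treats a set of ground facts as a complete model and checks whether the two non-classificatory statements mentioned above hold in it.

```python
# Sketch: treat ground atomic evidence as a complete model -- for ground
# facts this coincides with the truth-minimal Herbrand model -- and keep
# only those candidate statements that are true in it. The individuals
# and fact base are invented for illustration.

evidence = {("male", "adam"), ("female", "eve"),
            ("father", "adam"), ("mother", "eve")}
individuals = {"adam", "eve"}

def holds(pred, ind, model):
    """Closed-world satisfaction: an atom is true iff it is listed."""
    return (pred, ind) in model

# Candidate non-classificatory statements, evaluated in the model:
everybody_male_or_female = all(
    holds("male", x, evidence) or holds("female", x, evidence)
    for x in individuals)
nobody_father_and_mother = not any(
    holds("father", x, evidence) and holds("mother", x, evidence)
    for x in individuals)

print(everybody_male_or_female, nobody_father_and_mother)  # True True
```

Both candidates survive because the model is taken to be complete: the absence of, say, ("mother", "adam") from the evidence is read as its falsity.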

There is a direct connection between Peirce's conception of abduction as the formation of explanatory hypotheses and the classificatory induction setting, if one is willing to view a theory that correctly classifies the examples as an explanation of those examples. In this paper I suggest drawing a similar connection between Hempel's conception of confirmation as a relation between evidence and potential hypotheses and the non-classificatory induction setting outlined above. Non-classificatory induction aims at constructing hypotheses that are confirmed by the evidence, without necessarily explaining it. Rather than studying material definitions of what it means to explain or to be confirmed by evidence, as is done in the works referred to above, in the following sections I will be concerned with a logical analysis of the abstract notions of explanation and confirmation.
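To make the distinction concrete, here is a minimal propositional sketch (my own illustration, not the paper's formalism): the hypothesis p∨q is confirmed by the evidence p, in the sense of holding in the model constructed from it, without entailing, and hence without explaining, that evidence.

```python
from itertools import product

# Propositional sketch (illustrative) of the contrast between the
# explanatory and the confirmatory reading, over two atoms p, q.
atoms = ["p", "q"]
models = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=2)]

E = lambda m: m["p"]              # evidence: p
H = lambda m: m["p"] or m["q"]    # candidate hypothesis: p or q

# Explanatory reading: H explains E if every model of H satisfies E.
# Here it does not: the model {p: False, q: True} satisfies H but not E.
explains = all(E(m) for m in models if H(m))

# Confirmatory reading: H is confirmed if it holds in the model m0
# constructed from E (closed-world: atoms not in E are false).
m0 = {"p": True, "q": False}
confirmed = H(m0)

print(explains, confirmed)  # False True
```

The example shows the two settings coming apart: a hypothesis can be confirmed by the evidence without classifying or entailing it.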

3. Inductive consequence relations

In the sections to follow I employ the notion of a consequence relation, originating from Tarski [24] and further elaborated by Gabbay [11], Makinson [20], and Kraus, Lehmann & Magidor [18, 19]. In this section I give a largely self-contained introduction to this important metalogical tool. The basic definitions are given in section 3.1. In section 3.2 I consider some general properties of consequence relations that arise when modelling the reasoning behaviour of inductive agents. Section 3.3 is devoted to a number of considerations regarding the pragmatics of consequence relations in general, and inductive consequence relations as used in this paper in particular.

3.1 Consequence relations

We distinguish between the language L in which an inductive agent formulates premisses and conclusions of inductive arguments, and the metalanguage in which statements about the reasoning behaviour of the inductive agent are expressed.

4An alternative approach is to consider the information-minimal partial model of the evidence [8].

In this paper L is a propositional language5 over a fixed countable set of proposition symbols, closed under the usual logical connectives. We assume a set of propositional models U, and a satisfaction relation ⊨ ⊆ U×L that is well-behaved with respect to the logical connectives and compact. As usual, we write ⊨α for ∀m∈U: m⊨α, for arbitrary α∈L. Note that U may be a proper subset of the set of all truth-assignments to proposition symbols in L, which would reflect prior knowledge or background knowledge of the inductive agent. Equivalently, we may think of U as the set of models of an implicit background theory T, and let ⊨α stand for 'α is a logical consequence of T'. The metalanguage is a restricted predicate language built up from a unary metapredicate ⊨ in prefix notation (standing for validity with respect to U in L) and a binary metapredicate |< in infix notation (standing for inductive consequence). In referring to object-level formulae from L we employ a countable set of metavariables α, β, γ, δ, …, the logical connectives from L (acting as function symbols on the metalevel), and the metaconstants true and false. Formulae of the metalanguage, usually referred to as rules or properties, are of the form P1, …, Pn / Q for n≥0, where P1, …, Pn and Q are literals (atomic formulae or their negations). Intuitively, such a rule should be interpreted as an implication with antecedent P1, …, Pn (interpreted conjunctively) and consequent Q, in which all variables are implicitly universally quantified. An example of such a rule, written in an expanded Gentzen-style notation, is

⊨α∧β→γ , α |< β
───────────────
α∧¬γ |<
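As a concrete illustration of how such rules can be read, the following Python sketch checks an instance of a rule of the form α |< β, ⊨β→γ / α |< γ (known as Right Weakening in the nonmonotonic-reasoning literature) against a toy inductive consequence relation. Both the two-atom language and the chosen relation (joint satisfiability with the premiss) are invented for illustration, not taken from the paper.

```python
from itertools import product

# Sketch: checking an instance of a metalevel rule against a toy
# inductive consequence relation |< over a two-atom language.
atoms = ["p", "q"]
U = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=2)]

def entails(a, b):
    """The validity metapredicate: |= a -> b with respect to U."""
    return all(b(m) for m in U if a(m))

def inc(a, b):
    """Toy |< : b is an inductive consequence of a iff a and b are
    jointly satisfiable in U (invented example relation)."""
    return any(a(m) and b(m) for m in U)

# Instance of the rule  alpha |< beta, |= beta -> gamma / alpha |< gamma
# with alpha = p, beta = p and q, gamma = q:
alpha = lambda m: m["p"]
beta  = lambda m: m["p"] and m["q"]
gamma = lambda m: m["q"]

assert inc(alpha, beta) and entails(beta, gamma)   # antecedent holds
print(inc(alpha, gamma))                           # True
```

For this particular relation the rule in fact holds for all instances: any model witnessing α∧β also witnesses α∧γ whenever ⊨β→γ.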