Uncorrected manuscript accepted for Information and Software Technology (IST) Published version will be available at: http://dx.doi.org/10.1016/j.infsof.2017.01.011

Using argumentation theory to analyse software practitioners’ defeasible evidence, inference and belief

Austen Rainer
Department of Computer Science and Software Engineering, University of Canterbury, Christchurch, New Zealand
[email protected]

ABSTRACT
Context: Software practitioners are often the primary source of information for software engineering research. They naturally produce information about their experiences of software practice, and the beliefs they infer from their experiences. Researchers must evaluate the quality and quantity of this information for their research.
Objective: To examine how concepts and methods from argumentation research can be used to study practitioners’ evidence, inference and beliefs so as to better understand and improve software practice.
Method: We develop a preliminary framework and preliminary methodology, and use those to identify, extract and structure practitioners’ evidence, inference and beliefs. We illustrate the application of the framework and methodology with examples from a practitioner’s blog post.
Result: The practitioner uses (factual) stories, analogies, examples and popular opinion as evidence, and uses that evidence in defeasible reasoning to justify his beliefs and to rebut the beliefs of other practitioners.
Conclusion: The framework, methodology and examples could provide a foundation for software engineering researchers to develop a more sophisticated understanding of, and appreciation for, practitioners’ defeasible evidence, inference and belief. Further work needs to automate (parts of) the methodology to support larger-scale application of the methodology.

Keywords: behavioural software engineering; evidence; experience; story; argumentation; explanation; analogy; software practice; qualitative analysis; evidence based software engineering.

1 Introduction

Software practitioners are often the primary source of information for software engineering research. They are respondents to questions in surveys and interviews, participants in case studies, and subjects in experiments. Practitioners observe the world, maintain beliefs about the world, share information with each other about the world, and act to change the world; and with all four of these attributes practitioners possess the ability to choose (to some degree) their observations, beliefs, information sharing and actions. Practitioners are not just passive conveyors of information about the world, but are instead active generators of that information.

Practitioners naturally produce a very large volume of information during the course of their practice e.g. email exchanges, technical reports, blog posts. The (variable) quality and quantity of this information has the potential to provide valuable insights into software engineering practice, and also into the attributes of the practitioners themselves i.e. their observation, belief, information sharing, action and choice making about software practice. In particular, the information is valuable for its insights into practitioners’ evidence, inferences and beliefs that, in turn, relate to three areas of significance for software engineering research: evidence, arguments and explanations.

Like researchers, practitioners are also fallible: practitioners’ observations, beliefs, information sharing, actions and choice making are often unreliable in some way (although not intentionally deceptive). Practitioners therefore have a distinctive role in the research process, as fallible generators of information about software engineering practice. (An additional set of challenges relates to the nature of software engineering itself: as a highly complex, distributed, only partially visible, rapidly changing, multi-agent cognitive activity. Those additional challenges lie outside the scope of this paper.)

In relying on software practitioners as a primary source of information for software engineering research, researchers have the difficult task of evaluating the quality and quantity of practitioners’ information, or a subset of that information.

This paper aims to examine how concepts and methods from argumentation research can be used to study practitioners’ evidence, inference and beliefs so as to better understand and improve software practice. To scope our investigations, we focus at this stage on blog posts written by software practitioners, and treat these posts as a type of testimonial evidence from (expert) witnesses. We present our progress on the development of a preliminary conceptual framework and a preliminary methodology. The framework relates information, evidence, inference and belief; proposes a set of evidential tests for information; and uses argumentation schemes to infer conclusions (beliefs) from other beliefs or from evidence, particularly evidence based on personal experience. We demonstrate the framework and methodology with illustrative examples from a blog post written by a very experienced software practitioner. We briefly consider how the illustrative examples can be integrated with research. Further research will need to automate (parts of) the methodology to support application of the methodology to larger amounts of naturally produced information.

A motivation for the development of the framework and the preliminary methodology – and also the use of argumentation schemes – is the belief that the software engineering research community would benefit from a greater appreciation of the range of reasoning and knowledge that practitioners use; that research can contribute by helping practitioners and researchers better judge the cases where such reasoning and knowledge is strong, weak or fallacious; and that research can help strengthen weaker reasoning and knowledge, and help to expose fallacious reasoning.

Research has developed a number of approaches to evaluating practitioners’ information, for example protocols, triangulation, replication and systematic review.
The work we report here is intended to complement and extend existing approaches in three areas of software engineering research:

• Analyses of qualitative data [1]: for example, the framework and methodology can be used to develop interview protocols to gain deeper insights into the ‘internal structure’ of practitioners’ reasoning, in particular the way they justify, through reasoning, their beliefs.
• Behavioural Software Engineering [2]: for example, the framework and methodology can be used to compare and triangulate the reasoning processes of different individuals and groups, for example the way that practitioners seek to persuade others, or evaluate and decide on technologies to adopt.

• Evidence Based Software Engineering (EBSE) [3][4]: for example, the argumentation schemes in the methodology could contribute to guidelines for integrating best evidence from research with practical experience and human values.

The remainder of the paper is organised as follows. Section 2 reviews prior work in software engineering on evidence and its sources, on events, stories and explanations, and on argumentation. The section also briefly considers a body of work from law and argumentation studies that underpins the framework and methodology. Section 3 develops the conceptual framework. Section 4 introduces argumentation schemes as patterns of inference from different types of evidence to conclusions. Section 5 presents the preliminary methodology. Section 6 presents illustrative examples of the application of the methodology and of the evidence, inferences and beliefs of practitioners. Section 7 critically reflects on the work and summarises the contributions. The appendix provides detail on the methodology.

2 Prior research

2.1 Evidence

The empirical software engineering research community values the collection, analysis, assurance and use of evidence, and argues that the practice of software engineering should be based on, or informed by, evidence [3]. Evidence is a difficult concept to define. Simply defined here (we discuss it in more detail in section 3.3), evidence is a concept of relation: A is evidence of B. For example, a failure in a software module is evidence of a fault in that module [5].

Software engineering research has considered the classification and ranking of evidence [6], the combination and synthesis of evidence from different sources using statistical [7] and non-statistical approaches [8], and the description of different aspects of evidence [6]. Classification, ranking, syntheses and description of evidence are all activities that, by definition, work with information that has already been defined as evidence: such evidence is then classified into types of evidence, ranked or graded, synthesised with other evidence, and described for its qualities. These classifications etc. tend to be applied at the level of the findings of a study and therefore tend to be applied ex post facto. Being ex post facto study-level classifications, they are more applicable for secondary studies, such as Systematic Reviews [9], that work with the findings from primary studies.

The use of these classifications etc. becomes problematic during the conduct of primary research where one is evaluating items of information to determine whether those items should subsequently be treated as evidence. Evaluating items of information is pertinent when one is examining statements from practitioners, such as from interviews and blog posts. A set of evidential criteria would help evaluate whether items of information present in a practitioner’s report may be treated as evidence. We develop such criteria as part of our preliminary conceptual framework in section 3.

2.2 Sources of evidence

In previous research [10] we recognised that practitioners most valued information that was provided by other practitioners, with the ideal type of information being information sourced from a local expert.

More recently, Devanbu et al. [11] conducted a large survey at Microsoft, asking the 564 respondents for their opinions relating to claims about software engineering. The respondents were also asked to rank the sources of information that influenced the formation of their opinions. Respondents ranked sources in the following order (from highest to lowest): personal experience, peer opinion, mentor/manager, trade journal, research paper, and other. The source of information therefore appears to have an important role in influencing practitioners’ opinions and beliefs.

Devanbu et al. [11] also found that a practitioner’s beliefs do not necessarily correspond with actual evidence from the respective project in which the practitioner is currently involved. This finding highlights a significant challenge for research that uses practitioners as the source of evidence: distinguishing a practitioner’s opinions that are based on her or his immediate personal experience from those opinions that have been formed from other sources.

We are interested in practitioners as a source of information and in the reports that they naturally produce. We distinguish between a practitioner’s primary information (i.e. information that is clearly informed by the practitioner’s personal experience) and a practitioner’s secondary information (i.e. information that is not clearly informed by the practitioner’s personal experience and which may instead be formed from indirect sources, such as peers). With naturally produced reports, there is the task of evaluating both the source of the information and the content of the information to determine whether that information (or part of it) can and should be treated as evidence. Schum writes, “… what we believe we know about the credibility of a person who gives us testimony is often at least as inferentially important as what this person tells us.” ([50], p. 108)

We are particularly interested in evidence that is based on primary practitioner information as this, in principle, is evidence most closely connected with observation. Once we have established such evidence we can then relate it to beliefs. Inference and argument provide the links between information, evidence and belief/opinion. The relationships are illustrated in Figure 1.

Figure 1 Relationship between practitioners’ sources of information, evidence and beliefs

The relationships in Figure 1 are a simplification. For example, secondary information may influence the way that the practitioner chooses to interpret personal experience, and prior personal experience may influence the way that the practitioner chooses to interpret current personal experience. The arrows in the figure signify inference.

2.3 Events, explanations and stories

Primary practitioner information is based on real situations (ideally contemporaneous situations), and how those situations unfold over time. Situations may occur at different levels of abstraction e.g. Curtis et al. [12] distinguish different layers of behaviour. Researchers studying software development have used a number of phrases to refer to
real situations that unfold over time: chronology [13][14], evidence based timeline [15], [16], project history [17], project memory [18][19], and narrative [20]. A defining element of these unfolding situations is the event. Several researchers have developed approaches to detect and analyse events e.g. [21]–[23]. Post-mortems are one practitioner-based method for reviewing previous events [24]. Other researchers have proposed methodologies for investigating events retrospectively (such as the retrospective case study [25], [26] and the longitudinal chronological case study [13]) and contemporaneously (such as ethnography, participant observation, and think-aloud protocols). Miles and Huberman’s classic handbook [27] suggests techniques for working with event-based qualitative data.

In recent years, software engineering research has directed more attention at explanations within the context of theory and theory-building e.g. [28]–[30]. Drawing on Gregor’s critical review [31], Johnson et al. [30] state that most theories (explanations) have three characteristics: they attempt to generalize local observations and data into more abstract and universal knowledge; they typically represent causality; and they typically aim to explain or predict a phenomenon. All three characteristics relate to events. Simply defined here, an explanation is a predictive statement of why the events occurred [32]. Within software engineering research, it is not clear how much research has been undertaken to understand practitioners’ explanations of events, in contrast to researchers gathering practitioners’ descriptions of their projects; in other words, what predictive statements of why events occurred are put forward by practitioners.

Sharp and her colleagues [33]–[36] have used ethnography to investigate the culture, beliefs and behaviour of software practitioners. Complementary approaches are grounded theory (e.g. [37], [38]) and anthropology [39]. Passos et al. [40], [41] used ethnographic case studies to examine the relationship between practitioners’ beliefs and their software practices.

Later in this paper, we develop a preliminary framework that uses practitioners’ accounts of event-based real situations. We treat these accounts as factual stories that contain a (partial) description of events together with a (partially) explicit or implicit explanation of those events. We are interested in how practitioners’ explanations and evidence contrast with researchers’ explanations and evidence; and how practitioners argue in relation to stories, e.g. the conclusion – a belief – that they infer from the personal experience.

2.4 Argumentation in software engineering research

Arguments may be understood as a type of persuasive reasoning with a particular structure: a structure of assertions that, through inference, are intended to support a conclusion. Software engineering research has previously considered the use and evaluation of arguments in three main areas: assurance cases in safety critical systems, the use of argumentation in relation to theory development, and the teaching of Evidence Based Software Engineering (EBSE). Of these areas, the most developed is assurance cases in safety-critical systems e.g. Nair et al. [42], [43] report a Systematic Review (SR) of existing techniques for safety evidence structuring and assessment. It is the other two areas – argumentation in theory building, and in teaching EBSE – that are most relevant to the current paper.

Software engineering research has also considered argumentation in relation to the development of software engineering theories. Several papers [28], [32], [44] have promoted the importance of theory, or have explored how theory supports analytical
generalisation. Jørgensen and Sjøberg write ([28], p. 6) that revisions to a theory depend on our ability to provide a robust argument that includes the use of results from other studies and, in some cases, the resolution of previous contradictory results. Similarly, for Sjøberg et al. [32] theory development requires significant reflection and skill in argumentation. Hannay et al. [44] analyse in detail a number of arguments relating to the intentional study of artificial situations. Software engineering research has considered argumentation in relation to EBSE and in particular teaching EBSE to students. Jørgensen [45] includes an appendix to his paper that discusses the Toulmin [46] model of argumentation. Rainer et al. incorporate aspects of argumentation when teaching Evidence Based Software Engineering [47], [48]. Brereton [49] reports her experiences of teaching Evidence Based Software Engineering to computing students. Whilst there is no explicit reference to argumentation in her publication, one would expect EBSE students to consider arguments in their analyses. The most widely used model of argumentation in software engineering research appears to be Toulmin’s [46] model of argumentation and, as noted, Jørgensen [45] provides a brief summary of that model. We consider Toulmin’s model later in this paper.

2.5 Related work from other disciplines

The most developed and sophisticated understanding of information, evidence, inference and belief is probably found in law [6]. We therefore draw on a body of highly-cited, coherent argumentation research, within the context of law, to inform the development of the framework and the methodology. We summarise the influences on our framework and methodology below.

For the development of the conceptual framework, we draw primarily on Schum’s book The Evidential Foundations of Probabilistic Reasoning [50], together with Anderson et al.’s Analysis of Evidence [51] and Twining’s Rethinking Evidence [52]. Schum stands at the forefront of the emerging discipline of a ‘science of evidence’ [53], and Schum, Anderson and Twining have all published seminal work on argumentation within the context of law and legal thinking. (Within software engineering research, Pfleeger has published a series of papers [54]–[58] that cite Schum’s work, however she has not considered Schum’s work from the perspective we do here.) We complement Schum et al.’s ideas with Walton’s work on argumentation, in particular his books Argument Evaluation and Evidence [59] and Argumentation Schemes [60]. Walton has published extensively on reasoning, argumentation and rhetoric, and his work has contributed to preparing legal arguments and to artificial intelligence. We further complement Walton’s work with work by Bex [61] and his colleagues [62], as Bex et al. have a particular interest in stories and the integration of stories with arguments. Schum, Anderson, Twining and Walton all explicitly recognise Toulmin’s work on argumentation, and we draw on Toulmin’s ideas, particularly from his book The Uses of Argument [46].

For the development of the methodology, we first drew on Fisher’s book, The Logic of Real Arguments [63], to develop an initial method for identifying, extracting and reconstructing arguments. Fisher’s method complements other methods, such as those of Scriven [64], Hughes [65], Thomas [66], and Toulmin et al. [67]. Each of these authors brings a complementary perspective on the identification, extraction, construction and evaluation of arguments. For example, Fisher is interested in evaluating sustained “theoretical” arguments (arguments that can be analysed in isolation, without relying on empirical evidence) whilst Thomas is interested in reasoning for practical decision making. We aspire to develop a methodology that accommodates the various methods of these authors, as our longer-term aim is to use the conceptual framework and
methodology to analyse the arguments, evidence and explanations of researchers and practitioners.

2.6 Summary

We have recognised the importance of event-based, direct personal experience of practitioners; that practitioners use such experience as evidence; that practitioners form explanations relating to that evidence; and that practitioners argue for and with evidence and explanations. We have also recognised that software engineering research does not appear to have studied practitioners’ conceptions of evidence of events, explanations of events, and arguments based on experience-based evidence and explanations. We have recognised a body of work in law and argumentation studies that contributes to our development of the preliminary framework and methodology. We explore these issues further in subsequent sections of this paper.

3 Preliminary conceptual framework

3.1 Overview to the conceptual framework

Figure 2 General research process

Figure 2 presents a research process for investigating software engineering practice using reports naturally produced by software practitioners. The process suggests a flow of information, from the practitioner’s information about the situation/s of interest through to the production of research outputs, and then feeding back into the situation. There are two standpoints, or perspectives, on a situation of interest: the situation as it interests the practitioner and the situation as it interests the researcher. We return to this distinction in standpoints in section 3.10. Figure 1 can be considered to exist within the ‘cloud’ presented in Figure 2.

Figure 3 A model of argumentation

Figure 3 presents a simple model of argumentation comprising information, evidence, propositions and inferences. In this model, the arguer infers evidence E from information d, and the probability of proposition P from evidence E. The model is based primarily on concepts from Schum [50], with additional input from Anderson et al. [51], Twining [52] and Toulmin [46]. The different styles of the arrowheads in the figure are intended to highlight different applications of inference. Inferences are used to support other inferences e.g. an inference from evidence E to proposition P is warranted by a generalisation G, the generalisation itself being a kind of inference based on a backing, B. (See Toulmin [46], [67] and Jørgensen [45] for more information on generalisations, warrants and backings.)

Taken together, Figure 1, Figure 2 and Figure 3 suggest that research progresses by chaining together inferences from different types of information, from information as some kind of raw data to evidence to propositions (and on to structures of propositions, such as models and theories).

Figure 3 is intentionally simplified. It treats an argument as a single linear chain of inference from information through to proposition. Actual arguments – chains of inference – would often be multi-legged, for example with multiple items of information independently or collectively inferring to one or more items of evidence, and multiple items of evidence independently or collectively inferring to one or more propositions. There may also be interim propositions that lead to a final or ultimate proposition e.g. a chain of inferences from P1 → P2 → P3 → … → PF. Schum [50] explores extensively the various configurations of chains of inference. Figure 3 also implies that only forward-inferencing occurs. There can also be backward reasoning, also known as abductive reasoning, or inference to the best explanation (IBE) [59]. Figure 3 is intended to accommodate these other types of inference. To explicitly represent the different types of inference in the figure would complicate the figure.
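To make the chain of inference in Figure 3 concrete, the sketch below represents information d, evidence E and proposition P as a small data structure, with each inferential step carrying its generalisation G and backing B. This is an illustrative sketch only: the class names, and the blog-post scenario used to populate them, are invented for the example and are not part of the paper’s framework.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Inference:
    """One inferential step, warranted by a generalisation G that itself rests on a backing B."""
    generalisation: str              # G: the warrant licensing the step
    backing: Optional[str] = None    # B: the support for the generalisation itself

@dataclass
class Node:
    """An item in a chain of inference: information d, evidence E, or a proposition P."""
    kind: str                        # "information" | "evidence" | "proposition"
    content: str
    inferred_from: List["Node"] = field(default_factory=list)
    via: Optional[Inference] = None  # the inference that supports this node

# A single-legged chain d -> E -> P; actual chains are often multi-legged (P1 -> P2 -> ... -> PF).
d = Node("information", "A blog post recounts a failed deployment of framework X")
E = Node("evidence", "The practitioner directly observed the failed deployment",
         inferred_from=[d],
         via=Inference("First-hand testimony meeting the evidential criteria can be treated as evidence",
                       backing="Evidential criteria of Table 1 (relevance, competence, credibility)"))
P = Node("proposition", "Framework X is risky for business-critical web applications",
         inferred_from=[E],
         via=Inference("What an experienced practitioner observes to fail is plausibly risky",
                       backing="Argument-from-expert-opinion scheme (section 4.2)"))

for node in (d, E, P):
    print(f"{node.kind}: {node.content}")
```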

3.2 Defeasible reasoning

Walton et al. write, “A defeasible argument is one in which the conclusion can be accepted tentatively in relation to the evidence known so far in a case, but may need to be retracted as new evidence comes in. A typical case of a defeasible argument is one based on a generalization that is subject to qualifications.” ([60], p. 2). Our position is that both software engineering researchers and software practitioners must inevitably make defeasible arguments e.g. because of the constraints on research and constraints on decision making in practice, and because of the complex, often invisible, and always-changing nature of software practice.

Walton et al. [60] state that common forms of defeasible arguments, for example expert opinion, were long categorised in logic textbooks as fallacious. Although such reasoning may often be fallible, it is not always wrong, and such reasoning is very often the predominant or even the only kind of reasoning available for our decision-making. Walton et al. write, “… it is not helpful to condemn such [expert] evidence as fallacious. Rather the problem is to judge in specific cases when an argument from expert opinion can properly be judged as strong, weak or fallacious.” ([60], p. 2).

A motivation for the development of the framework and the methodology, and also the use of argumentation schemes (see section 4), is the recognition that the software engineering research community would benefit from a greater appreciation of the range of reasoning and knowledge that practitioners use; that research can contribute by helping practitioners and researchers better judge the cases where such reasoning and knowledge is strong, weak or fallacious; and that research can seek to strengthen weak reasoning, and help to expose and reject fallacious reasoning.

For Walton et al. [60], there are three ways to attack, and therefore judge, a defeasible argument: attack a premise (cf. d or E in Figure 3), attack an inference (cf. G or B in Figure 3) or present a counter-argument. A consequence of defeasible arguments is that information may be missing from the stated argument e.g. one or more premises may be missing, the conclusion may not be explicitly stated, or an inference may not be stated. This makes such arguments easier to attack and to prematurely dismiss. We should therefore seek to employ the Principle of Charity [63]: when evaluating an argument, first interpret the argument in its most robust, persuasive form before evaluating the argument. The purpose of the Principle of Charity is to remind the evaluator to seek to evaluate the best available argument, rather than the argument that is easiest to criticise and dismiss.
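As a rough illustration of these three kinds of attack, the sketch below records a defeasible argument and the defeaters raised against it. The class, its methods and the example argument are our own illustrative constructions, not part of Walton et al.’s formal machinery.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefeasibleArgument:
    premises: List[str]
    inference: str                   # the generalisation/warrant linking premises to conclusion
    conclusion: str
    defeaters: List[str] = field(default_factory=list)

    def attack_premise(self, reason: str) -> None:
        """First attack: deny a premise (cf. d or E in Figure 3)."""
        self.defeaters.append(f"premise attacked: {reason}")

    def attack_inference(self, reason: str) -> None:
        """Second attack: undercut the inference (cf. G or B in Figure 3)."""
        self.defeaters.append(f"inference attacked: {reason}")

    def rebut(self, counter_conclusion: str) -> None:
        """Third attack: present a counter-argument for an opposing conclusion."""
        self.defeaters.append(f"rebutted by: {counter_conclusion}")

    def stands(self) -> bool:
        # Tentative acceptance: the conclusion holds only while no defeater has been recorded.
        return not self.defeaters

arg = DefeasibleArgument(
    premises=["An experienced practitioner reports that framework X failed under load"],
    inference="Expert testimony is presumptively reliable",
    conclusion="Framework X is unsuitable for high-load services")
print(arg.stands())   # True: accepted tentatively, after a charitable reconstruction
arg.attack_inference("the practitioner's experience concerns an old version of framework X")
print(arg.stands())   # False: the conclusion may need to be retracted
```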

3.3 Information, and criteria for evidence

At least some of the information flowing through the research process should, at some point, become evidence. It is difficult to define the concept of evidence; and without a definition of the concept of evidence, it is difficult to determine how and when information becomes evidence. Schum reviews definitions of the concept of evidence, concluding that, “When all is said and done we may not be able to define the word evidence so that everything acceptable in all recognised disciplines is included and everything else is excluded…” ([50], p. 21).

Twining writes that: “‘Evidence’ is a word of relation used in the context of argumentation… A is evidence of B… In that context information has a potential role as relevant evidence if it tends to support or tends to negate, directly or indirectly, a hypothesis [proposition]...” ([52], p. 441; emphasis added)

And Schum writes again: “A datum becomes evidence in a particular inference when its relevance to this inference has been established… The relevance of any datum has to be established by cogent argument.” ([50], p. 20; some emphasis removed for simplification)

Our position in this paper is that the arguer (who may be a practitioner or a researcher depending on the circumstance) chooses what information to treat as evidence and what information to discard; and that the arguer has the obligation to argue for her or his choice of evidence. Drawing primarily on Schum’s review, we propose in Table 1 a preliminary set of evidential criteria for evaluating when information can be used as evidence. The table includes an additional item, standpoint, which is not a criterion in itself but instead recognises that the chooser of evidence inevitably takes a particular perspective on the information. The table is intended to complement grading schemes, such as Wohlin’s evidence profile [6], that tend to apply ex post facto to a study’s findings.

The criteria presented in Table 1 are intended to apply across different types of evidence and, in that sense, are “substance-blind” (cf. [50]). The substance of evidence can vary considerably, and argumentation research has developed argumentation schemes for different kinds of evidence. We discuss argumentation schemes in more detail in section 4. Schum [50] discusses the implications of “substance-blind” evidential criteria in more detail. The criteria recognise the importance of events e.g. when considering the credibility of testimonial evidence.

Table 1 Preliminary set of generic criteria for using information as evidence (based on [50])

The information has…

Relevance: Information can be used as evidence if the information allows us to:
• revise our beliefs about the likeliness of a proposition being true or false; or
• revise one or more existing propositions; or
• generate entirely new propositions ([50]; p. 71).

Competence of witness for testimonial evidence: A competent witness is a person who could have made some relevant observation (or gathered relevant information) and who also understands what she or he has observed (what information she or he has gathered) ([50]; p. 109).

Credibility of tangible evidence: Information can be used as tangible evidence if there is sufficient:
• ‘Chain of custody’ i.e. knowledge of how the information was generated ([50]; p. 99)
• Accuracy i.e. the degree of conformance to what the information represents ([50]; p. 99).

Credibility of testimonial evidence: Information can be used as testimonial evidence if there is sufficient:
• Observational sensitivity i.e. sufficient ability in the observer to discriminate the occurrence and non-occurrence of the event of interest
• Objectivity i.e. sufficient ability in the observer to attend to the event itself and not be influenced by personal motivation or prior expectations
• Veracity i.e. sufficient ability to truthfully communicate what events the observer believes to have occurred or not occurred ([50]; pp. 100–108).

Inferential force: Information can be used as evidence if the information bears on how much and in what direction we revise our probabilistic beliefs ([50]; p. 69).

The chooser’s perspective on the information…

Standpoint: There are three aspects to the perspective:
• What role am I taking? For example, am I a researcher or a practitioner?
• At what stage in what process am I in?
• What am I trying to do? ([50]; pp. 72–74)

Similarly, the criteria presented in Table 1 give no indication of the granularity of the information. Information can occur at many different levels of granularity e.g. a measure of a line of code from a low-level programming language or a measure of the maturity of a key process area in a process maturity assessment.
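As an informal illustration of how the criteria of Table 1 might be applied when screening a practitioner’s report, the sketch below records a judgement against each criterion and a simple overall decision. The boolean treatment of the criteria, the numeric inferential-force score and its threshold are simplifying assumptions made for the example; the paper treats the criteria as matters of qualitative, argued judgement.

```python
from dataclasses import dataclass

@dataclass
class EvidentialAssessment:
    """One analyst's judgement of a practitioner report against the criteria of Table 1."""
    relevance: bool            # could it revise, replace or generate a proposition?
    competence: bool           # could the witness have observed, and understood, the events?
    credibility: bool          # chain of custody / observational sensitivity, objectivity, veracity
    inferential_force: float   # 0..1: magnitude of the revision to our probabilistic beliefs
    standpoint: str            # the role, stage and purpose of the chooser of the evidence

    def usable_as_evidence(self, minimum_force: float = 0.1) -> bool:
        return (self.relevance and self.competence and self.credibility
                and self.inferential_force >= minimum_force)

blog_post = EvidentialAssessment(
    relevance=True, competence=True, credibility=True, inferential_force=0.4,
    standpoint="researcher analysing a naturally produced report")
print(blog_post.usable_as_evidence())   # True, under these illustrative judgements
```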

3.4 Propositions and explanations

In law, proposition P is often intended to be a conclusion about a particular situation e.g. that Miss Scarlet killed Professor Plum with the candlestick in the library (to borrow an example from the game Cluedo). The distinction between general propositions and particular propositions is relevant for understanding the differences in explanations sought by software engineering researchers and software practitioners. We have previously argued [10] that software engineering researchers tend to seek more generalised knowledge, and hence more generalised explanations, whilst software practitioners tend to seek more contextual, specific knowledge and explanations. Similarly, researchers tend to seek propositional knowledge whilst practitioners tend to seek practical knowledge. Figure 4 provides a simplified classification. The model presented in Figure 3 is intended to accommodate both inference to more general propositional knowledge and inference to more specific practical knowledge. Figure 1 and Figure 2 describe a transition of knowledge from the bottom-right corner of Figure 4 to the top-left corner of Figure 4.

Figure 4 Researcher and practitioner preferences for types of knowledge and explanation

3.5 Events and stories

Schum [50], Anderson et al. [51], Twining [52], Walton [4] and Bex and Bench-Capon [59], [62] all recognise a relationship between arguments and stories. Twining [52] distinguishes between generalisations that are logically necessary in the context of rational arguments, and stories that are psychologically necessary in the context of human decision making. Twining and others recognise that stories seem to have a place within arguments, and Walton [4] develops and employs argument maps (essentially graph structures) that connect arguments and stories. Bex and Bench-Capon [62] consider how stories can be treated as evidence to support a conclusion, and propose an argumentation scheme (see section 4) to incorporate stories into arguments.

For Bex, a story is a coherent sequence of events, often involving subjects, objects, outcomes and other attributes [61]. Bex is interested in the moral of a story, the value/s that the story promotes, and how a story is used to persuade. For Bex and Bench-Capon [62], people often persuade not by imparting facts and rules, but rather by providing an interesting and convincing narrative.

Twining draws on Ricoeur’s [68] definition of a story as a narrative of particular events arranged in a time sequence and forming a meaningful totality. For Twining, the necessary elements of a story are particularity, time, change and connectedness between events (in which connectedness does not need to be causal). Twining [52] distinguishes between stories and scenarios: one narrates stories, but describes scenarios.

Information in practitioners’ naturally produced reports that contains Bex’s elements of a factual story, or that contains Twining and Ricoeur’s elements of a factual story, is hypothesised to be information that is more likely to be based on the practitioner’s personal experience. Story elements that are then connected by argument to propositions suggest beliefs that are more likely to be justified by personal experience, rather than by other sources of information (e.g. peer opinion, mentor’s opinion, trade journal, research). The preliminary methodology is designed to identify, extract and work with story-based information, arguments relating to that information, and propositions connected by arguments to stories.

3.6 Relevance of information

Although they appear similar, Schum [50] distinguishes between the relevance and the inferential force of information. Relevance is concerned with the general sense we have that information may have some influence in revising the probability of a proposition, the substance of the proposition, or the generation of a new proposition. Inferential force is a more specific criterion that concerns the magnitude and direction of influence of information. Relevance concerns a higher-level judgment of whether a practitioner’s report (or part of it) may influence our beliefs, whereas inferential force concerns a lower-level judgment of how particular items of information specifically influence our beliefs.

3.7 Competence and credibility of practitioners’ testimony

The starting point of the research process presented in Figure 2 is the practitioner, as it is the practitioner who is closest to software practice and the practitioner who gathers (and generates) information for use in the research process. Establishing both the competence of the practitioner and the credibility of that practitioner’s information are fundamental to the research process, as subsequent stages in the research depend on the quality of the information that is used as evidence. A significant difficulty for research is that practitioners do not necessarily naturally gather and generate the kinds of information suitable for research. One goal of the current paper is to investigate how researchers can identify reports that have been naturally produced by sufficiently competent practitioners and that contain sufficiently credible testimony. (We mean sufficiently competent and sufficiently credible in terms of the standards required by research, rather than implying anything more general about the competence and credibility of software practitioners.)

3.8 Credibility of tangible evidence

The tangible evidence that we are concerned with here is the report naturally produced by the practitioner; however, it is the testimonial content of that report that is of most interest to us. To establish the ‘chain of custody’ of the report, we need to know how the report came to be written and disseminated.

3.9 Inferential force

The inferences that are made from the evidence would typically be based on accepted standards for that discipline or community. Software engineering research has made increasing progress in establishing acceptable standards. Where an inference is not based on accepted standards, further supporting argument is required to justify the inference. For example, Seaman’s [1] paper on qualitative research can be treated as an argument for the acceptability of inferring evidence from qualitative data in software engineering research. Toulmin’s [46], [67] model of argumentation recognizes that a warrant provides a legitimate inference from data to claim, and that the warrant has a backing. In other words, there is an argument – the backing – to support the warrant.

Twining writes that inferences are based on a generalisation: “Every inferential step from particular evidence to particular conclusion… requires justification by reference to at least one background generalisation.” ([52]; p. 334) Although an inference is based on a generalisation, such generalisations do not necessarily need to be statistical generalisations. Inferences may, as counter-examples, be logical or analytical. And generalisations are often implied rather than stated explicitly.

3.10 Standpoint

The situation of interest in Figure 2 can be perceived from two standpoints, or perspectives: that of the practitioner and that of the researcher. According to Schum [50], standpoints are important for three reasons. First, the practitioner and the researcher will perceive the situation differently and will therefore judge the relevance of information and evidence in different ways. Second, information considered relevant at one stage of the inferential process may be dismissed as irrelevant at another stage. For example, once a proposition has been rejected, the information and evidence relating to the proposition may become irrelevant. Finally, people with different standpoints have different objectives. For example, a practitioner is trying to understand the software project in which she or he works so as to act effectively in that project, whereas a researcher is trying to understand that same software project in order to better understand projects in general. When researchers engage practitioners in their research (e.g. to complete a survey) there is the challenge of relating the practitioners’ standpoint/s to the researchers’ standpoints. We begin to explore how to relate the two standpoints in this paper.

4 Patterns of inference: argumentation schemes

4.1 Overview to argumentation schemes

Walton [60] and others have developed patterns of inference known as argumentation schemes, to catalogue, structure and reason about different types of inference. (A more accurate phrase is probably inferential scheme, however we have remained with the accepted phrase used by Walton and others.) Each argumentation scheme is related to a type, or types, of evidence (which may or may not be empirical evidence). The general structure of argumentation schemes is given in Figure 5.

Figure 5 General structure of argumentation schemes (inferential schemes)

Walton et al. analyse over 60 argumentation schemes [60], and many other schemes have also been proposed. Walton et al.’s classification of argumentation schemes is presented in Table 2. Argumentation schemes are typically presented in syllogistic form, with a major premise, a minor premise and a conclusion, as shown in Table 3. Where the critical questions are included in the argumentation scheme, each question becomes an
additional premise in the scheme. In terms of Figure 3, an argumentation scheme provides the backing B for the respective generalisation G that supports the inference from evidence E to proposition P.

Table 2 Summary of argumentation schemes

REASONING
Deductive reasoning: Deductive Modus Ponens; Disjunctive Syllogism; Reductio Ad Absurdum (etc.)
Inductive Reasoning: Argument from a Random Sample to a Population (etc.)
Practical Reasoning: Argument from Consequences; Argument from Alternatives; Argument from Waste; Argument from Sunk Costs; Argument from Threat; Argument from Danger Appeal
Abductive Reasoning: Argument from Sign; Argument from Evidence to a Hypothesis
Causal Reasoning: Argument from Cause to Effect; Argument from Correlation to Cause; Causal Slippery Slope Argument

SOURCE-BASED ARGUMENTS
Arguments from Position to Know: Argument from Position to Know; Argument from Witness Testimony; Argument from Expert Opinion; Argument from Ignorance
Arguments from Commitment: Argument from Inconsistent Commitment
Arguments Attacking Personal Credibility: Argument from Allegation of Bias; Poisoning the Well by Alleging Group Bias; Ad Hominem Arguments
Arguments from Popular Acceptance: Argument from Popular Opinion; Argument from Popular Practice

APPLYING RULES TO CASES
Arguments Based on Cases: Argument from Example; Argument from Analogy; Argument from Precedent
Defeasible Rule-Based Arguments: Argument from an Established Rule; Argument from an Exceptional Case; Argument from Plea for Excuse
Verbal Classification Arguments: Argument from Verbal Classification; Argument from Vagueness of a Verbal Classification
Chained Arguments Connecting Rules and Cases: Argument from Gradualism; Precedent Slippery Slope Argument; Sorites Slippery Slope Argument

Argumentation schemes can be aggregated into larger argumentation structures, for example argumentation maps. We use maps in our illustrative examples later in this paper.
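The sketch below shows one way such a scheme could be represented programmatically: premises, a conclusion, and critical questions that, when folded in, become additional premises (as described above). The dataclass and its field names are our own illustrative choices, not a standard library or an implementation from the paper; the instance shown uses the popular-opinion scheme discussed in section 4.4.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentationScheme:
    """General structure of an argumentation scheme (cf. Figure 5): premises, a conclusion,
    and critical questions that, when included in the scheme, become additional premises."""
    name: str
    premises: List[str]
    conclusion: str
    critical_questions: List[str] = field(default_factory=list)

    def with_questions_as_premises(self) -> List[str]:
        # Folding the critical questions into the scheme, as in Walton et al.'s fuller versions.
        return self.premises + [f"Critical question satisfied: {q}" for q in self.critical_questions]

popular_opinion = ArgumentationScheme(
    name="Argument from Popular Opinion",
    premises=["P is generally accepted as true",
              "If P is generally accepted as true, that gives a reason in favour of P"],
    conclusion="There is a reason in favour of P",
    critical_questions=["What evidence supports the claim that P is generally accepted as true?",
                        "Even if P is generally accepted as true, are there good reasons for doubting it?"])
print(popular_opinion.with_questions_as_premises())
```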

4.2 Argumentation scheme for expert opinion

With our interest in practitioners as the source of information about software practice, Walton et al.’s [60] argumentation scheme for argument-from-expert-opinion is particularly relevant. Walton et al. present an argumentation scheme (Table 3) together with critical questions to evaluate the argument (Table 4). The scheme given in Table 3 is actually the simplest version of four increasingly sophisticated versions developed by Walton et al. The most sophisticated version of the argumentation scheme integrates the critical questions into the scheme itself, so that the scheme is complete in itself. We show the scheme and the critical questions separately here to highlight the contribution of each, and because each is relevant to our subsequent analyses.

Table 3 Argumentation scheme for argument-from-expert-opinion (from [60])

Major Premise: Source W is an expert in subject domain D containing proposition P.
Minor Premise: W asserts that proposition P (in domain D) is true (false).
Conclusion: Proposition P may plausibly be taken to be true (false).

Table 4 Critical questions for the argument-from-expert-opinion argumentation scheme (from [60])

# | Category | Question
1 | Expertise | How credible is W as an expert source?
2 | Field | Is W an expert in the field that P is in?
3 | Opinion | What did W assert that implies P?
4 | Trustworthiness | Is W personally reliable as a source?
5 | Consistency | Is proposition P consistent with what other experts assert?
6 | Backup Evidence | Is W’s assertion based on evidence?
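As a rough sketch of how the scheme in Table 3 and the critical questions in Table 4 might be operationalised together, the function below accepts the conclusion only when every critical question has been answered satisfactorily. Treating an unanswered question as an outright defeater is a simplifying assumption made for this example; Walton et al. treat the questions dialectically, as shifting the burden of proof.

```python
CRITICAL_QUESTIONS = {   # Table 4
    "Expertise": "How credible is W as an expert source?",
    "Field": "Is W an expert in the field that P is in?",
    "Opinion": "What did W assert that implies P?",
    "Trustworthiness": "Is W personally reliable as a source?",
    "Consistency": "Is proposition P consistent with what other experts assert?",
    "Backup Evidence": "Is W's assertion based on evidence?",
}

def argument_from_expert_opinion(expert: str, domain: str, proposition: str,
                                 answers: dict) -> str:
    """Table 3: if W is an expert in D and asserts P, then P may plausibly be taken to be true,
    subject to the critical questions of Table 4."""
    open_questions = [category for category in CRITICAL_QUESTIONS if not answers.get(category)]
    if open_questions:
        return (f"Presumption for '{proposition}' is defeated; "
                f"unanswered critical questions: {open_questions}")
    return f"'{proposition}' may plausibly be taken to be true on {expert}'s authority in {domain}."

answers = {category: True for category in CRITICAL_QUESTIONS}
answers["Consistency"] = False   # e.g. other experts assert the opposite
print(argument_from_expert_opinion("W", "web development", "P", answers))
```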



4.3 Argumentation schemes for analogies and stories

Bex and Bench-Capon [62] explore how stories are used in arguments from analogy and put forward an argumentation scheme for this type of argument (see Table 5) together with critical questions relating to that scheme (Table 6).

Table 5 Scheme for argument-by-analogy

Major premise: Generally, case C1 is similar to case C2.
Minor premise: P is true (false) in case C1.
Conclusion: P is true (false) in case C2.


Table 6 Critical questions for the argument-by-analogy scheme

CQ1: Are there respects in which C1 and C2 are (too) different that would tend to undermine the force of the similarity cited?
CQ2: Is A the correct conclusion to be drawn from C1?
CQ3: Is there some other case C3 that is also similar to C1, but in which some conclusion other than A should be drawn?

Bex [61] proposes an argumentation scheme that combines practical reasoning with argument from analogy. The scheme is shown here in Table 7. (Bex [61] did not explicitly present critical questions for this scheme.)

Table 7 Argument scheme for practical reasoning based on a story

Major Premise: Character x performs action A, which promotes (demotes) value V, and gets positive (negative) results [outcome O].
Minor Premise: I am in a situation as character x.
Conclusion: Therefore I should (not) prefer actions that promote (demote) value V.

Broadly speaking, stories can serve as evidence in their own right (e.g. argument-from-story) or may form the basis of an argument-by-analogy. Stories and analogies can both be incorporated in the model in Figure 3 by treating them as evidence for a proposition P. The minor premise of an argumentation scheme becomes the evidence E, with the major premise of the argumentation scheme providing the generalisation to support the inference to proposition P. Such evidence might be broken into specific events or treated as an event in itself.
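The sketch below illustrates the argument-by-analogy scheme of Table 5 in this spirit: the minor premise (what held in the source case) plays the role of evidence E, and the similarity claim plays the role of the generalisation. The function, the two project cases and the proposition are invented for the illustration; in practice the critical questions of Table 6 would be needed to test whether the similarity really holds.

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    facts: str   # the (factual) story told about the case

def argue_by_analogy(c1: Case, c2: Case, proposition: str, similar: bool) -> str:
    """Table 5: if C1 is generally similar to C2 and P is true in C1, conclude (defeasibly) P in C2.
    The similarity claim acts as the generalisation; the facts of C1 act as the evidence E."""
    if similar:
        return f"'{proposition}' held in the {c1.name}; by analogy it plausibly holds in the {c2.name}."
    return f"No conclusion: the {c1.name} and the {c2.name} are not relevantly similar."

past = Case("previous project",
            "a small team rewrote a working product in a new language and missed its deadline")
current = Case("current project",
               "a small team is considering a rewrite of a working product in a new language")
print(argue_by_analogy(past, current, "The rewrite caused the schedule slip", similar=True))
```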

Table 8 Other argumentation schemes relating to practical experience

Argumentation scheme: Example
Premise: In this particular case, the individual a has property F and also property G.
Conclusion: Therefore, generally, if x has property F, then it also has property G.
Critical questions: 1. Is the proposition claimed in the premise in fact true? 2. Does the example cited support the generalisation it is supposed to be an instance of? 3. Is the example typical of the kinds of cases the generalisation covers? 4. How strong is the generalisation? 5. Do special circumstances of the example impair its generalizability?

Argumentation scheme: Popular opinion
General acceptance premise: P is generally accepted as true.
Presumption premise: If P is generally accepted as true, that gives a reason in favour of P.
Conclusion: There is a reason in favour of P.
Critical questions: 1. What evidence, like a poll or an appeal to common knowledge, supports the claim that A is generally accepted as true? 2. Even if A is generally accepted as true, are there any good reasons for doubting that it is true?

Argumentation scheme: Danger appeal
Premise 1: If you (the respondent) bring about A, then B will occur.
Premise 2: B is a danger to you.
Conclusion: Therefore (on balance) you should not bring about A.
Critical questions: [Not specified.]

Argumentation scheme: Distress
Premise 1: Individual x is in distress (is suffering).
Premise 2: If y brings about A, it will relieve or help to relieve this distress.
Conclusion: Therefore, y ought to bring about A.
Critical questions: 1. Is x really in distress? 2. Will y’s bringing about A really help or relieve this distress? 3. Is it possible for y to bring about A? 4. Would negative side effects of y’s bringing about A be too great?

Defeasible modus ponens

Verheij [69] has developed a defeasible version of the classic deductive modus ponens argumentation scheme. Defeasible modus ponens (DMP) allows for exceptions to the universal generalisation of modus ponens. We indicate the use of DMP in our analysis in section 6.
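A minimal sketch of defeasible modus ponens follows, assuming a simple exception-list representation (our own simplification, not Verheij’s formal treatment): the conclusion follows from the rule and its antecedent only while no recognised exception applies. The example rule about technology recommendations is invented for the illustration.

```python
from typing import List

def defeasible_modus_ponens(rule: str, antecedent_holds: bool, exceptions: List[str]) -> str:
    """From 'if A then (normally) B' and A, conclude B, unless a recognised exception applies."""
    if not antecedent_holds:
        return "no conclusion: the antecedent A has not been established"
    if exceptions:
        return f"B is withheld: exception(s) to the rule apply: {', '.join(exceptions)}"
    return f"conclude B, tentatively, under the rule: {rule}"

rule = "if a senior practitioner recommends against a technology then (normally) avoid it"
print(defeasible_modus_ponens(rule, antecedent_holds=True, exceptions=[]))
print(defeasible_modus_ponens(rule, antecedent_holds=True,
                              exceptions=["the recommendation concerns an obsolete version"]))
```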

4.4 Other argumentation schemes relevant to practical experience

A number of other argumentation schemes are considered in our illustrative examples. We summarise those schemes in Table 8. Notice that the argument from popular opinion is relevant to the conduct of surveys, a common practice in software engineering research: a survey finds that the majority of respondents report proposition P to be true and infers that there is reason to favour proposition P being true.

5 Preliminary methodology

5.1 Overview to the methodology

The overall aim of our research is to investigate what practitioners’ naturally produced information can tell us about software engineering practice. From this aim, we derive the following objectives:

1. to evaluate the competence of the producer of the information (e.g. evaluating the degree to which the practitioner who wrote the report can be considered an expert witness);
2. to evaluate the credibility of the information (e.g. evaluating the relevance and inferential force of the stories and arguments present within the report written by the practitioner);
3. to examine how credible information from a competent practitioner can be used as evidence and arguments in relation to the beliefs and explanations of practitioners; and
4. to examine how the evidence, arguments, beliefs and explanations of software practitioners can then be related to findings from other studies of software engineering practice.

We need a methodology to help us achieve our research objectives. Methodologies take time to mature, evolve and diversify into variations as they are applied and researchers learn from the experience of application. We therefore present a preliminary methodology in this section, with more detail provided in the appendix. We used a similar, but much simpler, approach in our analysis of focus groups on software process improvement [10]. In section 6, we examine a small number of illustrative examples to demonstrate the application of the preliminary methodology and to help us better understand the challenges to achieving our research objectives.

5.2 Summary of the preliminary methodology

We treat reports that are naturally produced by practitioners as a type of testimonial information, and we use argumentation schemes and argumentation maps to identify, aggregate and evaluate the content of reports for their evidence, inferences and beliefs. Figure 6 presents an overview to the methodology. The left-hand column of the figure labels the main stages of the methodology, the central column provides further detail on each stage, and the right-hand column indicates typical artefacts developed at each stage. There is considerable iteration to the methodology.

The analyst of the source text/s first identifies one or more texts to analyse. From each of those texts, the analyst then begins to identify relevant excerpts from the text. And for each of those excerpts, the analyst begins to identify components of arguments, evidence and explanations. An excerpt may ‘produce’ zero, one, or more than one argument, item of evidence, or explanation. It is during the identification and construction stages that the analyst is able to properly determine whether an argument, item of evidence, or explanation is present. Once the individual arguments, evidence and explanations from an excerpt are sufficiently developed the analyst can then begin to integrate these into a more complete structure for that excerpt. This is where argumentation schemes fit in. We refer to this structure as an Argument-eXplanation-Evidence (AXE) structure. Once the series of excerpts have been analysed, the resulting AXE structure/s from each excerpt can be aggregated into an overall AXE structure. At each stage of the methodology, one or more artefacts are produced, such as marked-up excerpts, structured textual summaries of the components of the excerpt, and excerpt-specific argument maps.
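As a rough sketch of the iteration just described (texts, then excerpts, then components, then per-excerpt structures, then an overall structure), the code below walks a text’s excerpts and aggregates whatever components are identified into an overall AXE structure. The name AXEStructure is taken from the paper; everything else (the function names, the trivial keyword-based component identification and the example sentence) is an invented placeholder for what is, in the paper, a manual analytical step.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AXEStructure:
    """Argument-eXplanation-Evidence structure, built per excerpt and then aggregated."""
    arguments: List[str] = field(default_factory=list)
    explanations: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)

def identify_components(excerpt: str) -> AXEStructure:
    # Placeholder for the manual identification and construction stages; in practice the
    # analyst judges each excerpt, guided by the indicator words of section 5.3.
    axe = AXEStructure()
    if "because" in excerpt.lower():
        axe.arguments.append(excerpt)
    return axe

def analyse_text(text: str, excerpt_spans: List[Tuple[int, int]]) -> AXEStructure:
    """Iterate the stages of Figure 6: excerpts, components, per-excerpt structures, overall."""
    overall = AXEStructure()
    for start, end in excerpt_spans:
        per_excerpt = identify_components(text[start:end])
        overall.arguments.extend(per_excerpt.arguments)
        overall.explanations.extend(per_excerpt.explanations)
        overall.evidence.extend(per_excerpt.evidence)
    return overall

print(analyse_text("We chose Python because the team already knew it.", [(0, 50)]))
```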

Figure 6 Overview to the methodology

The relationship between evidence, inferences, arguments and explanations is complex and this complexity may appear confusing. The complexity is due to the way in which argument, inference, evidence and explanation can work at different levels of abstraction, supporting each other. An arguer can argue that some data should be treated as evidence, and is therefore arguing for an inference from data to evidence. The arguer can also argue from that evidence to a conclusion. An arguer can therefore argue for evidence and with evidence. Similarly, the arguer can argue for and with explanation. The stages identified in Figure 6 have been further decomposed into steps, summarised in the appendix.

5.3 Identification of components and excerpts

To identify components of arguments, evidence and explanations, the analyst should be sensitive to the presence of specific words or phrases in the text, taking account of the context. Previous scholars of argumentation have identified various indicator words. For example, the word “therefore” suggests the presence of a conclusion and consequently a potential argument, the word “cause” suggests an explanation, and proper nouns and events suggest stories. Some words, such as “because”, may suggest an argument or an explanation. We demonstrate these indicator words with our examples in section 6, and provide a sample of indicator words in the appendix.
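A simple keyword pass could help an analyst (or a future automated tool) flag candidate components before they are confirmed in context. The sketch below is illustrative only: the indicator lists are a small assumed sample rather than the fuller set given in the appendix, and a flag is a prompt for the analyst, not a classification.

import re

# Illustrative sketch: flag sentences containing indicator words. The word
# lists are a small assumed sample; "because" is deliberately ambiguous and is
# flagged as both a possible argument and a possible explanation.
INDICATORS = {
    "argument":    ["therefore", "so", "hence", "because"],
    "explanation": ["cause", "because", "the reason is"],
}

def flag_candidates(sentence: str) -> list:
    """Return the component types a sentence *might* contain.
    The analyst must still confirm each candidate in context."""
    found = []
    lowered = sentence.lower()
    for component, words in INDICATORS.items():
        if any(re.search(r"\b" + re.escape(w) + r"\b", lowered) for w in words):
            found.append(component)
    return found

print(flag_candidates("That's not a safe choice, because it's known to be slow."))
# -> ['argument', 'explanation']  (ambiguous: the analyst decides)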

5.4 Marked-up text

We use a lightweight mark-up notation to mark up excerpts, in order to start to identify elements of arguments, evidence and explanations. This mark-up notation will need to be extended in due course, both to handle additional concepts (e.g. context, emotive language) and additional elements of existing concepts (e.g. additional attributes for stories, such as characters and story outcomes). Given the ambiguous nature of language, for example where the word “because” could indicate a cause in an explanation or a reason in an argument, the mark-up will also need to be flexible enough to handle concurrent tagging of chunks of the text. We present examples of the mark-up in more detail when we discuss the illustrative examples from the practitioner.
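One common way to support concurrent tagging is stand-off annotation, where tags are stored as character spans over the raw text rather than inline, so the same chunk can carry several tags at once. The sketch below is a minimal illustration under that assumption; the sentence is a paraphrase of the excerpt analysed in section 6.3, and the span boundaries and labels are our own.

# Illustrative sketch: stand-off annotation allows the same chunk of text to
# carry several tags at once (e.g. a REASON in an argument and a CAUSE in an
# explanation), which inline mark-up handles awkwardly. Spans and labels here
# are invented for illustration.
text = "I am scared of Ruby because it's known to be slow."

annotations = [
    {"start": 0,  "end": 19, "label": "CLAIM"},    # "I am scared of Ruby"
    {"start": 28, "end": 49, "label": "REASON"},   # "it's known to be slow"
    {"start": 28, "end": 49, "label": "CAUSE"},    # same chunk, second tag
]

def tags_at(offset: int) -> list:
    """All labels covering a given character offset (concurrent tags allowed)."""
    return [a["label"] for a in annotations if a["start"] <= offset < a["end"]]

print(text[28:49])   # the doubly-tagged chunk: "it's known to be slow"
print(tags_at(30))   # -> ['REASON', 'CAUSE']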

5.5 Representing evidence, arguments and explanations

The methodology represents arguments, stories and explanations in two ways:
• Using a textual, linear representation based on a template. Table 9 presents the template.
• Using an argument map comprising undirected and directed graphs (see, for example, Figure 7). Argument maps are used at the level of individual arguments, and at the level of aggregated arguments. We discuss argument maps in more detail when we analyse the practical examples. Walton describes argumentation maps in detail in his book [59].

Table 9 Template for representing arguments, stories and explanations

Section:
• Marked-up excerpt
• Structured textual representation of the argument, story and explanation (Argument; Evidence, e.g. story, analogy; Explanation; Argumentation scheme)
• Main beliefs/conclusions abstracted from the argument or story

The textual representations are intended as interim representations and, due to space constraints, we do not present examples here.
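Since space constraints prevent full textual examples here, the sketch below indicates how one filled-in template entry might be organised if recorded programmatically. The field names follow Table 9; the values are placeholders rather than an actual analysis.

# Illustrative sketch: the Table 9 template captured as a structured record.
# Field names follow the template; the values are placeholders only.
template_entry = {
    "section": "6.x",                              # which example the entry belongs to
    "marked_up_excerpt": "<excerpt with mark-up tags>",
    "structured_textual_representation": {
        "argument":             "<premises and conclusion>",
        "evidence":             "<e.g. story, analogy, example>",
        "explanation":          "<explanandum and explanans, if present>",
        "argumentation_scheme": "<e.g. argument from analogy>",
    },
    "main_beliefs_or_conclusions": [
        "<belief abstracted from the argument or story>",
    ],
}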

6 Illustrative examples

6.1 Overview of the examples

To illustrate the preliminary framework and methodology, we analyse information from one blog post, entitled Language Wars [70], published by Joel Spolsky to his Joel on Software blog. We use excerpts from a single blog post to allow a coherent, simple and concentrated focus on the detail of the excerpts; if we used excerpts from several blog posts, we would need to provide background on each post, which risks fragmenting the examples. In future research we will look at excerpts from other blogs. In the blog post, Joel Spolsky provides advice to readers on which technology stack/s to use to develop business-critical, enterprise-level Web applications, including advice on technologies not to choose. The full blog post is approximately two A4 pages in length, and comprises 1,300 words over 20 paragraphs and 90 lines of text.

We identified and analysed several excerpts from the blog post. Due to space constraints, we present four excerpts here. As the opinions expressed by Joel Spolsky are the opinions of an expert, we first evaluate that expertise using the critical questions of the argument-from-expert-opinion argumentation scheme. We then examine each excerpt individually. Due to space constraints, we do not consider the critical questions of each of the other argumentation schemes in each of the illustrative examples.




We chose a blog post that is relevant to software engineering research because it focuses on technology stacks and technology adoption. The blog post is dated (a decade old); however, our interest in the blog post concerns the way that the practitioner argues, and uses evidence and explanations with those arguments, rather than how current the post itself is.

6.2 Argument from expert opinion: the expertise of the blog writer

The opinions expressed by Joel Spolsky in his blog can be evaluated using the argument-from-expert-opinion argumentation scheme. We start to ‘populate’ the scheme in Table 10. In Table 10, we leave proposition P unspecified because the subsequent excerpts, in sections 6.3 – 6.6, each contain instances of proposition P. The critical questions for the argument are partially presented in Table 11. Assessing the expertise of a blog writer will likely require information beyond the blog post itself (in Joel Spolsky’s case see, for example, https://en.wikipedia.org/wiki/Joel_Spolsky).

Table 10 Argumentation scheme for argument from expert opinion (from [60])

Major Premise: Joel Spolsky [source W] is an expert in developing business-critical, enterprise-level Web applications [subject domain D] containing [proposition P].
Minor Premise: Joel Spolsky asserts that proposition P in developing business-critical, enterprise-level Web applications [subject domain D] is true (false).
Conclusion: Proposition P may plausibly be taken to be true (false).

Table 11 Answers to the critical questions for the argument from expert opinion

1. Expertise: How credible is W as an expert source?
The blog writer has many years’ experience as a software developer, and has (at the time of the blog post) been blogging about software development for over six years.

2. Field: Is W an expert in the field that P is in?
Yes, as the blog writer has developed web applications, and has also written about web applications, for many years. Spolsky created the project management software Trello, was a Program Manager on the Microsoft Excel team, worked for Viacom and Juno Online Services, founded Fog Creek Software, and co-launched the Stack Overflow programmer Q&A site.

3. Opinion: What did W assert that implies P?
The blog writer makes a number of assertions in each of the examples considered in sections 6.3 – 6.6.

4. Trustworthiness: Is W personally reliable as a source?
It is difficult to assess the personal reliability of the blog writer; however, the blog writer is a well-known public figure in his field.

5. Consistency: Is P consistent with what other experts assert?
This would need to be explored further through analysis of other experts’ views, and analysis of empirical evidence. We briefly explore consistency in section 6.7.

6. Backup evidence: Is W’s assertion based on evidence?
W’s assertions are based on many years’ professional experience as a software developer. For this analysis, his experience is expressed as stories, analogies and examples.
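The scheme and its critical questions can be read as a defeasible checklist: the conclusion stands only provisionally, and an unanswered critical question weakens it. The sketch below condenses the Table 11 answers into such a checklist; encoding the answers as True or None is our own simplification of what is really a qualitative judgement.

# Illustrative sketch: argument from expert opinion as a defeasible checklist.
# The critical-question answers are condensed from Table 11; True / None is a
# deliberate simplification of a richer qualitative assessment.
scheme = {
    "major_premise": "W (Joel Spolsky) is an expert in domain D "
                     "(business-critical, enterprise-level Web applications).",
    "minor_premise": "W asserts proposition P about domain D.",
    "conclusion":    "P may plausibly be taken to be true.",
}

# True = satisfactorily answered, None = needs further investigation.
critical_questions = {
    "expertise":       True,   # years of development and blogging experience
    "field":           True,   # has built and written about web applications
    "opinion":         True,   # assertions identified in the excerpts
    "trustworthiness": None,   # hard to assess from the post alone
    "consistency":     None,   # needs comparison with other experts / evidence
    "backup_evidence": True,   # stories, analogies and examples offered
}

open_questions = [q for q, ok in critical_questions.items() if ok is not True]
if open_questions:
    print("Conclusion stands only provisionally; open critical questions:",
          open_questions)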



6.3 Example 1: analogy and rebuttal

The marked-up text for the first example is presented in Table 12, with the accompanying argument map presented in Figure 7. Two argument structures can be identified in the text: a brief argument about a company called 37 Signals and its use of Ruby on Rails, and a rebuttal argument about Ruby on Rails not being a safe choice.

Table 12 Example 1: 37 Signals (with mark-up notation)

Excerpt: So while < REASON ID=1 Ruby on Rails is the fun answer> and , and , < REASON ID=2 that's not a safe choice for at least another year or six>. < REASON ID=3 I for one am scared of Ruby> < REASON ID=4 (1) it displays a stunning antipathy towards > and < REASON ID=5 (2) it's known to be slow>, ...

The first structure can be treated as an (incomplete) argument from analogy (denoted A in the figure), or argument from story (denoted S|A): there is the implication that if your situation, or story, is like 37 Signals, you should use (or consider using) Ruby on Rails. The second structure, the rebuttal argument, presents a counter-argument comprising three argument structures: a general defeasible modus ponens (DMP), an argument from distress (D), and an argument from danger appeal (DA). We emphasise again that the argument structures used in the examples are illustrative. Reason R1 is not included in the argument map, to simplify the map, because R1 is not relevant to the arguments being considered here.




Figure 7 Argument map for 37 Signals example

The two-event story presented in the excerpt does not include any explicit causal information; however, the ordering of events might imply to the reader a causal relationship: using Ruby on Rails ‘causes’ (in some way) 37 Signals to make lots of money. The blog post only implies the main conclusion of the argument: do (not) use Ruby on Rails for developing critical web applications. This conclusion is a generalised practical proposition. Formally, the proposition might take the form: in situations of type S, for applications of type A, do not use Ruby on Rails.
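As a rough, code-level companion to Figure 7 (the figure itself remains the authoritative representation), the sketch below encodes the two structures of Example 1 as a small support/attack graph. The statement texts are paraphrases, the object of R4 is elided in our copy of the excerpt, and the arrangement of R3 – R5 behind the rebuttal is one plausible reading rather than a reproduction of the figure.

# Rough, illustrative encoding of Example 1 as a support/attack graph.
# Statement texts are paraphrases; R1 is omitted, as in Figure 7; the exact
# arrangement of R3-R5 is one plausible reading, not a copy of the figure.
nodes = {
    "C":   "Consider using Ruby on Rails for this kind of application.",
    "S|A": "37 Signals uses Ruby on Rails and is doing well (analogy / story).",
    "R2":  "Ruby on Rails is not a safe choice for at least another year or six.",
    "R3":  "I for one am scared of Ruby.",
    "R4":  "It displays a stunning antipathy towards [object elided in our copy].",
    "R5":  "It is known to be slow.",
}
edges = [
    ("S|A", "C",  "supports"),   # argument from analogy / story
    ("R4",  "R3", "supports"),   # reasons for the fear (danger appeal)
    ("R5",  "R3", "supports"),
    ("R3",  "R2", "supports"),   # distress / danger back the 'not safe' claim
    ("R2",  "C",  "attacks"),    # rebuttal of the implied conclusion
]

def attackers(target):
    """Nodes that attack the given node."""
    return [src for src, dst, rel in edges if dst == target and rel == "attacks"]

print(attackers("C"))  # -> ['R2']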

6.4 Example 2: story, explanation and rebuttal

The marked-up text for the second example is presented in Table 13, with the accompanying argument map in Figure 8. Three argument structures can be identified: an argument from analogy, or from story (denoted A|S in the figure), that includes a simple causal explanation (denoted X1); a rebuttal of the causal explanation based on a negative version of the argument from popular opinion (denoted !PO in the figure); and a rebuttal of the original argument through a re-interpretation of the analogy (denoted A|S). The practitioner therefore attacks a premise and presents a counter-argument.

Table 13 Mark-up of example 2: Paul’s story

Oh and I know that and then , < INFERENCE but> honestly < REASON ID=6 only ever believed him> and, < REASON ID=7 ,

For the explanation presented in the story, it is a complex explanation involving experience, knowledge and the implication of practical expertise. There is also the possibility of a causal explanation; with the text available, however, we can only speculate on whether the author intends to imply a causal explanation.

Figure 9 Argument map for the Copilot internship

The main conclusion of the argument is, once again, only implied: experience in a technology leads to better products. This conclusion is a generalised proposition, but it is not explicitly practically oriented.

6.6 Example 4: argument from example

The marked-up text for the final example is presented in Table 15, with the accompanying argument map in Figure 10. In this example, there is no explicit story, as there are no explicit events or changes in state. The text may instead be understood as an argument from example (denoted E in the map) describing a particular case or situation, together with the key phrase, “[the] in-house language [was] written by one of our best developers” (emphasis added here). The main conclusion of the argument is, once again, only implied: expert developers produce better tools/products. In this example, the better tool/product is a very advanced, functional-programming dialect of Basic. The final example is also relevant because it provides information on the domain, D, in which the blog writer is an expert. An evaluator could then consider the degree to which the case presented helps answer the critical questions for argument from expert opinion (cf. Table 11).






Table 15 Example 4: the Wasabi language

# 12

Excerpt