
Navigating the methodological mire: practical epistemology in design research

Ben Matthews, The University of Queensland
Margot Brereton, Queensland University of Technology

There is no method that can help us to make reasonable use of methods.
Hans-Georg Gadamer (in Boyne 1988)

The methodological mire

The field of design research is a marketplace of methodological1 diversity. It isn’t just that there are many different research methods in the field, but that there are many different methodological ‘foundations’—principles for how research can and should be done, for how contributions to knowledge ought to be pursued. This is not particularly surprising given how broadly ‘design’ is often conceptualised. Herbert Simon (1996) suggested “the proper study of mankind is the science of design” (p.138); Nigel Cross (2006) identified a principal difference between humankind and the animal kingdom to be our ability to design. In light of such encompassing notions of design, it is difficult to imagine why the field of design research should exhibit any less diversity than that found within and among the arts, the humanities, and the human and social sciences. A cursory survey of design research can easily identify rich traditions of design inquiry along each of these different lines. For example, some strands of design research are conducted as design history; Boyd-Davis’ (2012) exposition of the evolution of the timeline as an artefact of graphic design is a recent example. Design research is also pursued in the form of design critique, where various extant artefacts are analysed to identify salient aspects of their form, function, interactivity and the like. Bardzell et al. (2010) is an example of a form of design criticism applied to the design of interactive systems. Research in design is also conducted through experiments, of the kind that have been developed in psychology, where test subjects are asked to perform certain tasks under various experimental constraints as a means of empirically testing hypotheses. For instance, interested in whether ‘novel’ designs are more aesthetically pleasing than ‘typical’ ones, Hekkert et al. (2003) devised experiments to explore the relationship between the perceived originality and/or conventionality of the industrial design of artefacts against perceptions of their aesthetics. Other research takes a more naturalistic approach informed by ethnographic, sociological or anthropological methods, in order to study, e.g., how practicing designers accommodate ambiguity and disagreement in interaction (McDonnell 2012), or the ad hoc criteria they rely on in making practical decisions (Lloyd and Busby 2001). A growing proportion of design research is conducted in a manner in which designing is a component of the research method. The subfield of participatory design is a visible body of work that has been undertaken (primarily) through the engagement of the researchers themselves in collaborative design events and activities, often in the mould of action research (e.g. Bødker and Buur 2002). Some design research has no empirical component, relying instead on conceptual arguments, analogies and forms of logical reasoning, as Liddament (2000) lucidly does.

1 Technically, ‘methodology’ refers to the study or analysis of methods and their underlying principles. However, the term suffers from frequent (mis)use as roughly synonymous with ‘method’. This is exacerbated by the fact that in English, ‘methodological’ is the adjective for both nouns ‘method’ and ‘methodology’ (Merriam-Webster 2012). Phrases such as “methodological error” commonly refer to, e.g., an error pertaining to the application of methods, rather than to the study of methods. Clearly, this adds to the confusion. In this chapter we use ‘methodology’ to mean the analysis of methods, but there is no acceptable alternative to ‘methodological’, which we predominantly use in its ‘pertaining to methods’ sense. The ‘methodological mire’ is the bog concerning methods, their choice, and their foundations, not a mire concerning their study per se.
This is a very small sample of design research, and yet the breadth of disciplinary allegiances and methods on display is staggering. History, criticism, psychology, sociology, anthropology and philosophy are represented in this spectrum, as are documentary methods, critical analyses, experimental procedures, statistical analyses, interviews, ethnographic “thick” description, interaction analyses, action research case studies and conceptual analyses. And this ignores the various general philosophical stances that may be implicit in some of the work above, such as social constructivism, cognitivism, pragmatism, post-positivism, hermeneutic phenomenology and others. Welcome to the mire of design research. Variety is not a bad thing, of course, nor do we want to suggest that methodological diversity is a detriment to the development of design research as a discipline. But we do refer to this as a mire for several reasons. In our experience, this methodological situation can create multiple points of confusion for those preparing to conduct research in design. There are a number of questions that can seem to receive very different answers

depending on where (i.e. from which disciplinary, philosophical or methodological position) the researcher stands, such as:

• what is a good research question?
• how should a study of design be designed?
• what methods are valid?
• how can knowledge about or for design be gained?

These are pressing practical questions for researchers. And lurking nearby is a set of related, thornier concerns:

• how much philosophical ‘baggage’ accompanies the choice of a research method? For example, do I need to assent to Heidegger’s Sein und Zeit before I can employ a ‘phenomenological’ method such as in-depth interviewing?
• how is it possible to compare the results of research that have been advanced under different methodological programs?
• are the results of carefully controlled experiments automatically at odds with descriptive naturalistic claims, simply by virtue of the fact that they are derived using different methods originating from different knowledge traditions?
• if I use quantitative methods of data collection and analysis, am I ipso facto a ‘positivist’ (and thereby subject to the various criticisms of positivism that have been advanced in the past forty years)?
• can I create my own method?

The parallel existence of many contrasting methodological approaches in the field can make it seem that selecting one among these alternatives automatically calls into question the legitimacy of many of the others (heightened by the fact that positions such as constructivism were largely formulated in opposition to positivism or cognitivism). These methodological concerns can be paralysing for novice researchers, because it can often seem that either one has to become an expert in comparative philosophy before one can begin to do research, or that one has to commit (blindly if need be) to one methodological tradition of research and work solely within it in order to safely navigate the mire. In this chapter we propose two alternative options open to design researchers, since there are presently few resources in design research that provide much guidance through this territory, and few comparative discussions of method in our field.

Epistemology and foundationalism

To begin, it is worth identifying how methods and philosophy are linked. Epistemology is the branch of philosophy concerning human knowledge and understanding. Epistemology considers what knowledge is, what counts as knowledge, how it is acquired, and the possible extent to which something can be known. In deciding on a method for investigating a research question we are naturally drawn into considering philosophical, epistemological issues such as these. Nevertheless, our argument will be that design research can circumvent many of the thornier ones.

The source of many of the concerns mentioned above can be traced to assumptions about the nature of epistemology. Much of modern epistemology has developed in response to scepticism. Sceptical arguments originate from a persistent concern with the possibility of delusion. You may think you know something obvious, such as that you are currently sitting on a chair, but it may yet be the case that you are mistaken. Perhaps you are hallucinating or dreaming the thing you think you know. The classical response to scepticism has been to try to find an Archimedean point about which doubt cannot be raised, and to use that as a basis from which to (re)construct the rest of our knowledge of the world. The most celebrated such attempt was Descartes’ ‘Cogito’: “I think, therefore I am”. Descartes’ ingenious response to scepticism was to identify one thing about which there could not be scepticism: scepticism itself. He accepted that he was able to doubt almost everything, but even so, the one thing he could not doubt was that he was just now doubting. Therefore he could be certain he existed (at least mentally). The details of his argument and the generations of philosophical responses to it are not important for our purposes, but the form of his response is important.
Descartes produced a foundationalist response to scepticism, treating knowledge as though it is a kind of edifice, built up piece by piece from axioms that cannot be undermined, through to conclusions about other things—in his case, the existence of God, of an external world, of other minds like his, etc. The charter of much of the field of epistemology has been to defend knowledge from scepticism in such a way, starting from initial axiomatic positions on what is real, what reality consists of, what cannot be doubted, and on through to less obvious (but derivable) entailments of those foundational matters. (G.E. Moore’s (2002) proof of an external world is another well-known attempt in this general epistemological tradition.) Method, conceived as a formal procedure for the production of knowledge, becomes one of the things that is implicated by, and arrived at through, a stepwise progression from foundations. On a foundationalist view, it can seem that we must first agree on fundamental issues such as of what reality truly consists, how we can know what is real, and what is to count as knowledge, before we could possibly debate anything as derivative as, e.g., what design research ought to be. Many of the methodological concerns identified in the previous section arise from an implicit view of our knowledge of the world as a kind of foundational edifice that is built up from fundamental ontological assumptions about the nature of reality and epistemological assumptions about how we can discover what reality is like. As long as this is our view of knowledge and method, those concerns are not easily dissolved. However, we suggest that foundationalism should be put to one side (at least temporarily) in order to see other ways through the mire.

Practical epistemology

In the ordinary course of our lives, our ‘knowledge of reality’ is largely unproblematic. We can manage to find our way from home to the local shops without problem, and we know where in the store to find rice to buy. It is only in particular circumstances (that are also very ordinary) that the status or certainty of this knowledge may become a matter of concern; for instance, when we have to explain to a guest how to get to the store, or describe in which aisle to find the rice. It is only while doing philosophy that we might ever need to apply a systematic and thoroughgoing method of doubt: we do not actually doubt that we have two hands while picking up a pack of rice off the shelf at the store. Indeed, systematic doubt would be detrimental to any course of practical action—any attempt to actually get something done. It is worth noting that in ordinary circumstances, doubt only takes place amidst a field of much else that is, and must be, taken for granted (c.f. Hughes and Sharrock 1997). A chemist doing lab research may doubt the rate of reaction of two chemicals that is measured by her equipment, but she cannot do so while also doubting the very existence of the chemicals in the beakers in front of her. Consider that in order to have a disagreement with someone about things such as the nature of reality, of knowledge, and of how we know what we think we know—we must tacitly agree with each other on an indefinitely large number of other things, including agreement on what it makes sense to say. The possibility of disagreement is predicated on intelligibility, among other things. This perspective paves a couple of new paths out of the mire. We do not need to think of differences between methodological traditions as running ‘all the way down’ to foundations upon which contrasting epistemological edifices are anchored. We can instead notice that outside of doing philosophy, the world (even the ‘hard’ scientists’ world) is one that is largely and necessarily taken for granted in order to conduct any business at hand, including rigorous research. This brings into focus specific aspects of the research act that tend not to be foregrounded in foundationalism.2

Path 1: The purpose of the research

In any research endeavour the researcher limits her attention to the investigation of specific aspects of the world. She has particular purposes in conducting research. These purposes are realised in a set of concrete decisions about what aspects of the world will be collected as data and presented as evidence of the phenomenon under investigation. Ordinarily, this is the job of a method. Method is not, on this view, a formulaic or automatic knowledge-production engine; its task is to offer the researcher a means of collecting, organising and presenting aspects of the world that are relevant to the purposes of the investigation. Furthermore, we need not scrutinise the philosophical foundations of the researcher’s methodological tradition in order to make sense of, or evaluate, the research. Indeed, such a philosophical foundation may never have been explicitly formulated. In design research, epistemology becomes plainly visible in the actual decisions researchers have taken in order to show us slices of the world that illuminate a phenomenon of interest. We do not need to attend to foundational philosophical assumptions, but only to those assumptions that are relevant to the problem at hand. And the relevant assumptions are manifest in the evidence that researchers collect, assemble and display in producing research. How does this offer a way out of the mire? It makes methodological choice subservient to the researchers’ own purposes in investigating a phenomenon, rather than to an a priori philosophical position that (in the case of classical epistemology) rarely comes into contact with the kinds of practical decisions that need to be made in order to conduct research. It matters much less which epistemological or methodological school one pledges allegiance to; rather, researchers’ epistemological stances are revealed in and through their concrete decisions with respect to how they have used data as evidence of a particular phenomenon.

Rigour in research does not stem from belonging to a particular philosophical tradition, nor from meticulous adherence to the tenets of a method such as grounded theory. In this sense, rigour consists only of being able to show your peers that the material you have gathered is reasonable as a response to the purposes of the investigation, and of sufficient quality and volume to make a contribution to your particular scholarly audience. The playing field has been levelled here—every practising researcher is a (kind of) practising philosopher; every piece of research, philosophical or not, can be read for the epistemological positions it embodies in its choices of data as evidence of its phenomena. In view are the research questions, aims, and foci, and how they relate to the data that has been marshalled in the research account.

2 Note that at least three major philosophical movements in the 20th century have opposed foundationalism in different ways, specifically strands within pragmatism, phenomenology and analytic philosophy following Wittgenstein.

Understanding how products affect people

An example from industrial design research may help make this point. A prominent line of research in industrial design investigates the relationships between the forms of artefacts and people’s emotional responses to them. Perhaps the most ingenious attempt to create a language-independent, cross-cultural comparative catalogue of the emotional responses generated by products belongs to Pieter Desmet. In a series of studies (Desmet et al. 2000, Desmet 2003) he developed and refined an experimental tool, called PrEmo, to gauge people’s emotional responses to the design of objects. Desmet’s work is a good candidate for demonstrating this particular path out of the mire, for several reasons. This research represents a particular tradition of design research informed by experimental psychology, with respect to both the theories and methods it employs. Desmet is particularly well read in the history of emotion research in psychology. So if we are curious to know the extent to which one must be within that disciplinary perspective in order to make sense of these research results, or in order to compare them with other research, this makes a good test case, as experimental psychology is just one among many disciplines (physiology, anthropology, sociology, philosophy) that investigate emotion. Desmet is also a particularly clear writer and a very careful scholar, making his work a pleasure to review. Our procedure is roughly: identify the research purpose and the phenomenon under study; examine the data we are shown as evidence of the phenomenon; consider the reasonableness of the evidence in relation to the purpose of the study (see Figure 1).

Figure 1. Beginning from the purpose of the research, one can judge the appropriateness, quantity and relevance of the data gathered for how reasonably it functions as evidence of the phenomena of interest.

Identify the purpose. In order to work through this path out of the mire, we must set aside the researcher’s disciplinary or methodological allegiance, and instead identify his or her purpose in conducting the research. In this case, Desmet’s aim in developing and deploying the tool has been to find answers to questions such as “What does this car make people feel?” as a means of investigating people’s emotional responses to a product’s (industrial) design.

Examine the data. We are shown the results of studies in which respondents were presented a series of tasks. The task consisted of a picture of (in this case) an automobile. The vehicle was presented alongside cartoon animations of fourteen different facial and bodily emotional responses: seven ‘positive’ (satisfaction, desire, fascination, admiration, pleasant surprise, inspiration, amusement), and seven ‘negative’ (unpleasant surprise, disgust, contempt, indignation, dissatisfaction, disappointment, boredom). Participants could score each emotion on a three-point scale, e.g. “I do feel this emotion”, “To some extent I feel the emotion”, and “I do not feel the emotion”. For each model of car in the test, the task they were given was to “Rate the puppets [cartoon animations of the specific emotions] to express what you feel towards this car model”. Different makes of vehicle were comparatively assessed in this way.

Consider evidence in relation to purpose. The purpose, to gauge people’s emotional responses to the design of products, is pursued by asking respondents to report on their feelings when shown a picture of a product. The reasonableness of this data with respect to this purpose depends on several premises that become apparent when one juxtaposes data with purpose like this. For example, for the data to be adequate evidence of the phenomenon of interest, a two-dimensional picture of a product must be considered a sufficient stimulus to generate an emotional response. Other properties of a product such as its scale, finish, texture, and depth dimension (or other aspects like its ‘new car smell’, the design of its instrumentation, dashboard or sound system) are excluded from consideration by virtue of the data that was collected and the details of the task that was set for the participants. Other practical methodological decisions also become noticeable. An emotional response must be considered something respondents can accurately introspect, identify, and report. The fact that respondents do not have the option to give a ‘no emotional reaction’ answer to the task implies that emotional responses are omnipresent, i.e. that in seeing a car respondents always feel something, and what they feel is a result of their seeing the picture of the car.3

These observations are not raised as criticisms or shortcomings of the work. Our point is only to show that in any research, irrespective of its philosophical orientation, the evidence it presents has effectively operationalized the purpose of the study in a very specific way, a way that can enable its readers to see many of the philosophical and methodological assumptions that are embodied by the study. Taking this kind of practical perspective on methodology has an advantageous consequence in comparison to foundationalism. Once we see epistemology as an element of research visible in concrete methodological decisions (such as what data to collect, and how it operationalizes the purposes of the study), we can have very specific conversations about the appropriateness of those decisions. Some researchers may not be convinced that our emotional responses to visual stimuli are always accurately introspectable.4 Some may not be certain that viewing a two-dimensional image and reporting on what you feel effectively captures the kinds of phenomena that they would like to see included in a study of “emotional responses to products”. The important thing is that such disagreements, should they arise, are not objections to entire disciplines (experimental psychology), particular philosophical positions (e.g. cognitivism) or general approaches to research (e.g. experimental reductionism). This conversation can be practical—about data, evidence, purposes and phenomena—rather than foundational. We will now look at a second way out of the mire.

3 It does not appear that respondents were prevented from choosing not to perform the task (i.e. they may have been able to elect to rate none of the puppets), but this is not an option visibly provided by PrEmo or encouraged by the visual design of the task.

Path 2: The claims of the research

There is another, closely allied path on offer, which begins from the claims (rather than the purposes) of a particular piece of research. In this respect, it is worth noting with Jacobs (1988) that “the rationality of any research enterprise is guaranteed not by some set of established procedures, but by a sensitivity to the nature of the claims being made, the burden of proof those claims impose, and the kind of evidence that can support those claims” (p.442). This is a valuable observation for the clarity it offers regarding the role of method. However, it is also important to realise that in design research there are many different kinds of claims. This is where the diversity of design research becomes starkly apparent. And different claims carry very different ‘burdens of proof’—standards for what kinds, and how much, evidence must be provided in order to justify the claim. When we think in terms of burdens of proof, and when we look at the many types of research that are conducted in our field, further limitations of the foundational view of knowledge in offering guidance for design research can be seen. It is not just that there are many different kinds of claims made in design research (e.g. empirical claims are different to methodological or conceptual claims), but that many design research claims are ‘soft’. There are few proofs in design research; most contributions are better seen as tentative, argumentative, persuasive, sensible or workable (c.f. Gaver 2012).

4 And indeed, this is one of several objections that Desmet has anticipated and discussed, and he has justified his methodological decisions in light of such concerns. We are unable to do justice to the depth of the methodological alternatives he has surveyed and thoughtfully considered in his development of PrEmo. Readers are directed to his original work, which stands as a model for the rigorous development of an experimental tool.


This is perhaps a situation that strict epistemologists might decry, as it would appear there are a great many things claimed by research as research that are genuinely not yet known. But this is the nature of (or at least the state of) design research as a field. This being the case, it is worth looking at the kinds of things that do count as design research, i.e. the nature of the claims advanced by researchers as contributions to the field. These are not anchored in an epistemological certainty that stands up to every conceivable doubt, but rather in the cogent presentation of argument and evidence that is, again, reasonable.5 Thus, our second path out of the mire is to look carefully at what researchers claim as the contribution of their research (or the contribution that they ideally hope to make). As readers, we hold that contribution up against the evidence and argument presented, and assess it as to its reasonableness. But additionally, we must also consider its value in terms of its novelty, its innovation, the new perspectives and directions it opens to the field, its potential for constructive application, its topicality and its interest. This gives a complex twist to the idea of a burden of proof of a research contribution, and dovetails with the observation that many design research claims are ‘soft’, i.e. unproven or underdetermined by the evidence presented in their support. The point is that the demonstration of a research contribution depends on more than just evidence and argument, but also on things like the potential usefulness of the contribution to the field or the novelty of the perspective on offer in relation to the field’s prevailing programs of research. A ‘soft’ claim may yet be an important claim, not because it is unassailably true, but because it is fertile, generative or insightful within the field’s current zeitgeist.6

Furthermore, there are different types of design research claims. To substantiate some claims, a conceptual argument may suffice without the aid of a formal research method or any new empirical data. For others, the contribution is entirely bound up with the patterns that emerge from an analysis of empirical data. For nearly all forms of research, however, the contribution is accomplished through sensitively balancing the strength of the claims made against the weight of evidence the authors have for that particular conclusion. Compare the work done by phrases such as “it was found that X”, “this strongly indicates that X”, “these results show that X”, “this suggests that X”, “we observed that X”, “given situation W, X is often likely”. In each case a claim is made in a manner sensitive to the relevance and volume of material presented. The burden of proof is relaxed by adverbs such as “often”, or by choosing a verb like “suggests” instead of “demonstrates”. “We observed X” is the kind of claim that is very nearly indisputable (as it carries a very small burden of proof). This is not an oversimplification, nor is it merely semantic hair-splitting. Research papers present themselves as offering a contribution to the field of research to which they are addressed, and authors take pains to formulate their claims in ways that are sensitive to the strength of their evidence and argument. Some claims can be stated baldly because the volume or quality of evidence presented grants an author license to do so. Other claims can be stated just as unequivocally, but only because they impose a very light burden of proof on the claimant. However, many design research claims are expressed with some reservation.

5 For example, Schön (1983) does not provide a knockdown proof that ‘technical rationality’, the idea that problem solving is made rigorous by applying scientific theory and techniques, is patently false. He marshals a set of examples from the professions, philosophy, education and elsewhere as a device for building a reasonable case that formalisms are not adequate to the task of addressing the problems that beset the professions.

6 Thus, when acting as reviewers for conferences and journals, we don’t only need to assess the merit of a research contribution against the evidence we have been shown, but also against the state of the art of the field and sometimes related fields.

Understanding why students get stuck in the design studio

Working through an example reveals this path to be something akin to ‘reverse engineering’ a research contribution. Our procedure is roughly: identify the paper’s claim as to its contribution; consider the burden of proof of that claim; evaluate the extent to which the evidence we have been shown meets that burden of proof (see Figure 2).

Figure 2. Beginning instead from what the research has claimed, one can work backwards to determine the ideal burden of proof of the claim. This enables readers to generate local, study-specific criteria to assess the appropriateness, volume and weight of the evidence that has been presented to support the claim.

The example we will take is drawn from design research in architecture. Sachs (1999) studied the phenomenon of design students getting ‘stuck’ in their studio projects. It works well as an example here because it comes from a different design discipline (architecture) than Pieter Desmet’s work, it employs different methods (open-ended interviews), and it is informed by a different tradition of research (qualitative sociology).

Identify a claim. When Sachs formulates part of her conclusion she does so in this schematic format: Phenomenon X is usually manifested in behaviour Y, with consequences Z, possibly caused by A, B, or C. In her own words (with our own schematic tags inserted), Sachs (1999, p.209) writes “The breakdown [X] is usually manifested in the student’s behaviour [Y], often affecting the design process so that it falters or even stops [Z]. Possible causes of ‘stuckness’ were examined… such as the design project requirements [A], confusion over the design process [B] and misunderstanding the studio instructor’s intentions [C].”

Consider the burden of proof. What is the burden of proof of a claim like this? Adverbs such as “usually”, “often” and “possibly” moderate the burden. The task is to consider what one might need to be shown in order to accept (or be convinced) that the claim is true. So we might expect to be shown several instances (since “often” and “usually” implicitly demand multiple cases) where students were identifiably stuck, where ‘stuckness’ is evidenced by the halt of progress in the design process. What about the “possible causes”? Since they are only presented as “possible” causes, we do not require more than a reasonable case that occurrences such as “confusion over the design process” are prior to, and seem relevant to, a breakdown.

Weigh the evidence in relation to the burden of proof. In Sachs’ account, we find two detailed descriptions of students stuck in their process, Miguel and Dalia. We focus on Sachs’ account of Dalia, which is presented in an ethnographic style punctuated with direct quotes taken from interviews in which Dalia explained the origins of her ‘stuckness’.7 It emerges from the account that Dalia identifies a number of different possible sources of the design stagnation she experienced. For example, Dalia acknowledged that she had neglected structural aspects of her solution that were important. She became stuck trying to reconcile those structural requirements with the overall design concept she was committed to developing.
This is the basis presented for identifying the design project itself as a possible cause of stuckness—students encounter such dilemmas in the course of trying to resolve the design problems they are given. Miguel’s and Dalia’s examples are introduced as elements of a larger data set that Sachs references in passing (e.g. “Two students who participated in the study reported that they were stuck because they did not have an image of the next step of the process” p.207, or “A student reported that she had abandoned her scheme because she felt it was ‘boring’.” p.208). Although we are not shown very many instances of the phenomenon under study, the material we are shown arguably meets the moderate standard of evidence entailed by the strength of the claim as it has been formulated.

Sachs’ formulation of this claim reveals it to be ‘soft’ (i.e. tentative, inconclusive or provisional). But soft claims do not equate to unimportant ones, nor are they necessarily any less a contribution to design research. On the contrary, Sachs’ claim is responsible, nuanced and methodologically sensitive, given the weight of evidence afforded by her data and analysis. Ultimately, the validity of such a claim rests on how reasonable the relationships between her identified phenomena and explanations are, in relation to what we have been shown in the data. Specifically, we must accept that it is a reasonable tack to ask students to identify and explain their own experiences of being stuck, to take those explanations as possible causes of ‘stuckness’, and to treat their descriptions of stuckness as accounts of the phenomenon of being stuck in a design process. Again, validity does not rest a priori on her philosophical commitments or her research methods. Whatever her commitments might be, they are manifest in her methodological decisions—i.e. it is through her interview data that she has found examples of (and therefore ostensively defined) what she will count as ‘stuckness’, ‘breakdown’, ‘confusion over the design process’ etc. within the study. The possible causes of stuckness identified in the study are ordered types generated from students’ own reflections on why they got stuck and how they eventually got past it in their process. Those methodological decisions are there for readers to consider as to their appropriateness, sensibility and reasonableness.

7 As we have done earlier with Desmet, we must similarly acknowledge that we are underselling the value of Sachs’ paper by rendering it in this fashion. Indeed, the claim of hers we are scrutinizing is not (in our own estimation) even the most important contribution of the paper; it is merely the most visible claim. Perhaps the most important contribution of Sachs’ study is in the way it qualitatively dimensionalises different phenomena of ‘stuckness’ and their causes, as those phenomena have been experienced and understood by the students themselves.
As before, this observation is not raised in order to criticise the research, but to show how basic methodological issues are revealed in researchers’ practical decisions regarding, in this case, what data will suffice as a “cause of ‘stuckness’”, how one might know it when one sees it, and what strength of claim one can make on the evidence that has been collected. Naturally, readers may make an assessment based on their own knowledge and experience of ‘stuckness’, if indeed they have encountered this phenomenon themselves. Using their own experience as a reference, they will judge whether the scope and scale of the inquiry has been sufficient and the methodological decisions adequate.

Practical Epistemology in Design Research

The benefits of seeing epistemology as a practical affair that is handled in concrete decisions with respect to purpose, phenomena, data, evidence, claim formulations and burdens of proof relate to the methodological concerns identified earlier in this chapter. It is possible to approach design research in ways that do not get bogged down at the outset in deep philosophical positions (see Figure 3).

Figure 3. An approach to pursuing considered design research that avoids the methodological mire, drawing on the two paths outlined above.

Hopefully we have made a reasonable case that the use of different research methods in design does not prevent us from being able to compare or discuss the results of research conducted under different methodological auspices. However, it does highlight the degree of care that must be taken with such comparisons. Researchers may cite many of the same sources approvingly and share a topic of investigation, but that offers no guarantee that they are studying the same thing, or that their results are comparable or cumulative. Research that measures emotions through self-assessment Likert scales is not investigating the same phenomena as research that measures emotions through people’s physiological responses (e.g. heart rate or skin conductivity). Concepts such as ‘emotional response’ or ‘stuckness’ are not phenomena that are just ‘out there’ waiting to be examined. Our concepts are transformed into phenomena by researchers who decide, for their present purposes, what slices of the world will be taken to count as an instance of ‘emotional response’ or ‘stuckness’. And those decisions are methodological and epistemological. No claim to be allied with a certain philosophical stance or methodological position can serve as shorthand justification for taking a particular approach to studying a phenomenon. We are all (researchers and citizens alike) epistemologically accountable, and nothing can spare the readers of research from the task of looking, seeing, thoughtfully considering and critically assessing the reasonableness of the decisions researchers have chosen to make.

In this way, all design research is comparable, and the important comparisons can be made on the basis of surface features of a study: what kind of data, for what purpose, in support of what strength of claim. Important methodological differences do not need to descend ‘all the way down’ to figureheads such as Karl Popper and Paul Feyerabend. We are also hopeful that such an approach can lead to an abandonment of paradigm wars that result in dismissive treatments of other forms of research solely on the basis of allegiance. Instead such an approach might open dialogue about different methodological possibilities, lead to greater methodological transparency, and support more comparative discussions of the often-contentious issues that relate to philosophical paradigm and method. Design research is diverse, and there are no signs it is growing less divergent. In this chapter we have tried to outline two strategies to aid practicing researchers in making sense of, and navigating, our collective methodological plurality.

References

Bardzell, J., Bolter, J., & Löwgren, J. (2010). Interaction criticism: Three readings of an interaction design, and what they get us. Interactions, 17(2), 32-37.

Bødker, S., & Buur, J. (2002). The design collaboratorium: A place for usability design. ACM Transactions on Computer-Human Interaction (TOCHI), 9(2), 152-169.

Boyd Davis, S. (2012). History on the line: Time as dimension. Design Issues, 28(4), 4-17.

Boyne, R. (1988). Interview with Hans-Georg Gadamer. Theory, Culture & Society, 5(1), 25-34.

Desmet, P. M. (2003). Measuring emotion: Development and application of an instrument to measure emotional responses to products. In M. Blythe, A. Monk, K. Overbeeke, & P. Wright (Eds.), Funology: From usability to enjoyment (pp. 111-123). Amsterdam: Kluwer.

Desmet, P. M., Hekkert, P., & Jacobs, J. J. (2000). When a car makes you smile: Development and application of an instrument to measure product emotions. Advances in Consumer Research, 27, 111-117.

Gaver, W. (2012). What should we expect from research through design? In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (pp. 937-946). ACM.

Hekkert, P., Snelders, D., & Wieringen, P. C. (2003). ‘Most advanced, yet acceptable’: Typicality and novelty as joint predictors of aesthetic preference in industrial design. British Journal of Psychology, 94(1), 111-124.

Hughes, J. A., & Sharrock, W. W. (1997). The philosophy of social research. London: Longman.

Jacobs, S. (1988). Evidence and inference in conversation analysis. Communication Yearbook, 11, 433-443.

Liddament, T. (2000). The myths of imagery. Design Studies, 21(6), 589-606.

Lloyd, P., & Busby, J. (2001). Softening up the facts: Engineers in design meetings. Design Issues, 17(3), 67-82.

McDonnell, J. (2012). Accommodating disagreement: A study of effective design collaboration. Design Studies, 33(1), 44-63.

Methodological. (2014). In Merriam-Webster dictionary. Retrieved from

Moore, G. E. (2002). Philosophical papers. London: Taylor & Francis.

Sachs, A. (1999). ‘Stuckness’ in the design studio. Design Studies, 20(2), 195-209.

Schön, D. (1983). The reflective practitioner. New York: Basic Books.

Simon, H. A. (1996). The sciences of the artificial (3rd ed.). Cambridge: MIT Press.