
Best practices in developing, conducting, and evaluating inductive research☆

Sang Eun Woo a,⁎, Ernest H. O'Boyle b, Paul E. Spector c

a Purdue University, United States
b University of Iowa, United States
c University of South Florida, United States

Abstract

This editors' introductory article to the Human Resource Management Review special issue on inductive research methods aims not only to provide an overview of the four main articles, but also to offer guidance to researchers and gatekeepers about how best to conduct such research. We address four specific goals in the current article. First, we present a brief overview of each of the four papers. Second, we provide a general background on deduction, induction, and abduction: what they are, how they are distinguished from one another and should be used in a complementary manner, and how our field has moved away from inductive toward deductive paradigms over the last five decades. Third, we shed further light on the current representations of deductive versus inductive approaches in our collective published works, and on what can and should be done to achieve a better balance between them as we move forward. Fourth, we offer several “best-practice” recommendations for how best to conduct and evaluate research that does not conform to the prevailing hypothetico-deductive model.

“Not enough theory” is a common criticism of submitted manuscripts offered by reviewers and editors. Indeed, the current zeitgeist of organizational science appears deeply invested in a “top-down,” deductive approach that relies primarily on testing a priori hypotheses. Accordingly, inductive research, conceived as “bottom-up,” data-driven, and/or exploratory, rarely appears in top-tier outlets. Unfortunately, this broad sentiment against exploratory and inductive research comes at a cost. As articulated by several leading scholars (e.g., Hambrick, 2007; Locke, 2007; Spector, Rogelberg, Ryan, Schmitt, & Zedeck, 2014), sole reliance on the hypothetico-deductive approach limits the advancement of organizational science (as well as other sciences) and can contribute to research and publication practices that are less than ideal. The absence of inductive research restricts our field to the study of only those questions that have a sufficient theoretical basis and discourages the exploration of new questions for which theory is not yet available. Further, the myriad research topics within human resource management (and management more broadly) carry with them many important research questions that might benefit from a more empirical and exploratory approach. With this as a backdrop, the goal of this special issue is to facilitate a thoughtful and balanced dialogue on the value that inductive research brings to organizational science and, relatedly, on what constitutes high-quality inductive research.

☆ We would like to thank Rodger Griffeth (the former editor-in-chief of Human Resource Management Review) for inviting us to work on this special issue. Also, we gratefully acknowledge the following individuals for their thoughtful responses to our written interview questions as summarized in this article: Neal Ashkanasy; Brad Bell; Gilad Chen; Jason Colquitt; Suzanne S. Masterson; Frederick Morgeson; Steven Rogelberg; Deborah Rupp; and Andrew H. Van de Ven.
⁎ Corresponding author at: Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907, United States. E-mail address: [email protected] (S.E. Woo).


Our objective for this editors' introductory article to the Human Resource Management Review special issue on inductive research methods is not only to introduce the four main articles, but also to provide guidance to researchers and gatekeepers about how best to conduct such research. We address four specific goals. First, we provide a brief overview of each of the four papers. Second, we provide a general background on deduction, induction, and abduction: what they are, how they are distinguished from one another and should be used in a complementary manner, and how our field has moved away from inductive toward deductive paradigms over the last five decades. Third, we draw from our interviews with some of the thought leaders (former and current editors of leading journals) within our field to shed further light on the current representations of deductive versus inductive approaches in our collective published works, and on what can and should be done to achieve a better balance between them as we move forward. Fourth, we offer several “best-practice” recommendations for how best to conduct and evaluate research that does not conform to the prevailing hypothetico-deductive model.

1. Overview of articles

The four articles included in this special issue cover some of the key topics related to inductive research that deserve careful attention, namely exploratory data analysis (Andrew Jebb, Scott Parrigon, and Sang Eun Woo), Big Data (Samuel McAbee, Ronald Landis, and Maura Burke), grounded theory (Chad Murphy, Anthony Klotz, and Glen Kreiner), and abductive reasoning (Robert Folger and Christopher Stein).

The first article, by Jebb and colleagues, introduces exploratory data analysis (largely developed by the prominent statistician John Tukey) as a rigorous methodological mechanism for “phenomenon detection” within the organizational sciences that draws on various statistical and graphical techniques. The authors clarify how exploratory data analysis is (and should be) distinguished from confirmatory data analysis, as well as from “data exploration” efforts that are rightly considered problematic when presented as confirmatory (e.g., p-hacking). A clear case is made for the importance of formally (and openly) distinguishing exploratory from confirmatory data analytic approaches in light of the recent dialogue in the field about replication-related issues. Jebb and colleagues also note that exploratory data analysis allows researchers to maximize the value of data, and they provide several examples of how this can be done in practice (e.g., multiple uses of a data set; implementation of graphical/visual analytic methods).

In the second article, McAbee and colleagues offer a “cautiously optimistic” perspective on the Big Data opportunities for inductive research in the organizational sciences. Specifically, they discuss how Big Data analytics (i.e., a set of techniques for identifying relations between observed variables and/or cases using Big Data) may facilitate organizational researchers' inductive efforts, and they illustrate these points with a number of specific examples of data-driven research and practice organized by major HR and related topics (e.g., selection, recruitment, performance management). At the same time, the authors discuss the three most commonly recognized limitations of Big Data analytics (i.e., dustbowl empiricism, overreliance on behaviorism, and data veracity) and argue for the importance of organizational scientists' knowledge and insights in interpreting the data.

While the first two articles consider the quantitative side of inductive research (focusing on the role of data and analytic techniques for detecting interesting phenomena), the third article, by Murphy and colleagues, introduces an example of qualitative data-driven approaches to theory building: grounded theory. Murphy et al. provide an introductory (yet sufficiently detailed) overview of what grounded theory is, how it differs from other inductive qualitative methodologies, and how it may be done in practice. Further, the authors note that the philosophical orientation of grounded theorists is diverse and often diverges from positivist traditions in which research is evaluated based on internal and external validity. This calls for a different set of guidelines for ensuring research quality when developing a grounded theory. In light of this, the authors highlight a set of criteria for building and evaluating the trustworthiness of a grounded theory (i.e., credibility, transferability, dependability, and confirmability), which is becoming a norm among grounded theorists in the broader field of management but has yet to be fully adopted within the HR research community.

The fourth article, by Folger and Stein, significantly extends and enriches this special issue's coverage of the deduction-induction divide within the organizational sciences by introducing the concept of abductive reasoning. As we elaborate in the next section, deduction, induction, and abduction are to be clearly distinguished in their respective roles for knowledge building, and all three modes of science should be fully recognized and appreciated within our field. Toward this goal of diversifying the methodological choices within organizational research, Folger and Stein provide a helpful introduction to abduction as a reasoning process in which a new, revised, or extended theory is developed after observing (or detecting) a surprising phenomenon.

2. Deduction, induction and abduction

Philosophers of science often distinguish three specific forms of inference that form the logical basis of a researcher's investigations: deduction, induction, and abduction. Deduction is simply reaching a logical conclusion based on true premises. If all objects A have property i, and object B is an A, then B will have property i. More concretely, if all employees in a company own an automobile, and Lynn is an employee of said company, then it follows logically that Lynn owns an automobile. Note that the conclusion merely follows logically from the premises. This is the logic of deductive/confirmatory research. We state hypotheses derived logically from a theory. If the theory and derived hypotheses are correct, then the results should come out as expected. For example, in structural equation modeling we specify a model assumed to be correct that will lead logically to a given structure in the data. Of course, the limitation of deduction is that we do not know whether or not our premises are correct, and our investigation is not a direct test of the premises, only of the conclusions that derive from those premises.
Thus Lynn might own an automobile even though not every employee owns one, and/or Lynn might not really be an employee (i.e., our assessment of Lynn's employment status lacked validity).
At best we can conclude that the data supported our premises (i.e., the model), but there is uncertainty about why. Again using structural equation modeling as an example, this is the model equivalence problem (MacCallum, 1995, pp. 30–31): a given pattern of data can be produced by different underlying models that cannot be distinguished by the data themselves.

Induction concerns generalizing results beyond the observations at hand. We observe that all employees in a particular location own automobiles and conclude that all employees at the company own automobiles. This is the basis for inductive/exploratory research. We take observations and look for patterns in the data, that is, relationships among variables that can be generalized from the sample at hand to broader populations of interest. In this special issue, Jebb and colleagues discuss some of the many quantitative data analytic methods that have been devised for doing this sort of study; McAbee and colleagues provide several examples of how Big Data methodologies (also largely quantitative in nature) may facilitate inductive research and practice in various organizational (e.g., HR) contexts; and Murphy and colleagues discuss a few specific examples of qualitative inductive research methodology, such as ethnography, discourse analysis, rhetorical analysis, and content analysis, as well as grounded theory, the key topic of their article.

Abduction takes things one step further than induction by not only drawing an inference based on observation but also deriving a feasible (and by some accounts the most feasible or best) explanation for a phenomenon. Thus an abductive conclusion might be that all employees own automobiles because their job requires them to conduct home visits using their own vehicles. In research practice, abduction is about explanation and the development of theories concerning the reasons for phenomena. Folger and Stein (2016-in this issue) provide an extensive introduction to the notion of abductive reasoning as a process of building a new theory, revising an existing theory, and/or synthesizing multiple theories into a coherent one in the face of a surprising event or phenomenon.

A healthy science (for a given academic discipline as a whole) requires a good balance of the three forms of inference: inductive/exploratory to discover new knowledge, abductive/explanatory to come up with feasible explanations and theories, and deductive/confirmatory to test the validity of those theories. Too much induction leads to a sterile science that is reduced to a catalog of disconnected facts, whereas too much deduction leads to an inbred and stagnant science that limits itself to what is already known. As noted by philosophers of science, induction is where the real discovery of new knowledge occurs (e.g., Hanson, 1958a, 1958b; Vickers, 2014). Discovery starts with inductive observation and proceeds to abductive explanation (Hanson, 1958b). Once a phenomenon is established, and perhaps tentatively explained, deductive approaches come into play to confirm its validity. Confirmation goes beyond merely replicating results that had previously been observed through inductive means. True confirmation involves the use of new methods that can rule out alternative explanations for the inductive results. For example, using self-report surveys to confirm models that are based on the results of exploratory self-report surveys is insufficient.
To provide convincing evidence, those models should be tested with data derived in a different way, and the more different the better. Models based on self-reports could be confirmed with field experiments in which the proposed antecedent is manipulated and the proposed consequence is assessed in a manner other than self-report. This use of converging operations is necessary to rule out common biases and confounds and, in the case of surveys, the possibility that the results were due to common method variance (Campbell & Fiske, 1959).

The organizational sciences and related disciplines (e.g., psychology) have become imbalanced in all but abandoning induction and abduction in favor of a one-size-fits-all deductive approach (Locke, 2007). Top-tier journals typically require an approach that presents the study as being designed to test theory-derived hypotheses in a confirmatory way, implying that the hypotheses preceded the data. Spector (2015) content-analyzed articles in the Journal of Applied Psychology and found that the percentage of deductive/confirmatory papers increased from 28% in 1971 to 100% in 2015. As pointed out by Cucina and McDaniel (in press), in far too many cases hypotheses in deductive papers are based on what they call pseudo-theories: explanations based on conjecture, personal opinion, and limited findings that cannot be called true theories. They argue that often the theoretical basis for a hypothesis is itself just another hypothesis.

Unfortunately, the demand for a deductive approach is not without adverse consequences. Coupled with confirmation bias (the preference of journals for papers that confirm rather than fail to confirm hypotheses), authors are placed in the untenable position of being implicitly pressured to present exploratory results as if they were confirming a priori hypotheses. This leads to questionable practices, such as selective reporting of only confirming results and HARKing (hypothesizing after results are known; Kerr, 1998). The consequences of such practices include the publication of too many Type I errors, contributing to the replication crisis in some areas, as well as overestimation of the amount of support there is for hypotheses and theories.

The field of organizational sciences is in need of a new direction that takes us back to the balanced application of the scientific method. We should equally value exploratory, explanatory, and confirmatory approaches, with top journals publishing all three types of papers. Inductive/exploratory papers should be valued for their ability to detect new phenomena and new patterns in data. They might make use of qualitative methods, which can be of particular value in exploratory studies because they do not constrain informants to making ratings on predetermined items; rather, informants are free to provide ideas and elaborate on context.

3. Status quo: words from the wise (interviews with journal editors)

To gain further insights into the current orientation of organizational science on the deductive-inductive spectrum, we (the guest editors) interviewed via email correspondence nine recent and current editors of top journals in the fields of management and applied psychology (alphabetically ordered by last name): Neal Ashkanasy (Journal of Organizational Behavior - Former); Brad Bell (Personnel Psychology - Current); Gilad Chen (Journal of Applied Psychology - Current); Jason Colquitt (Academy of Management - Former);
Suzanne S. Masterson (Journal of Organizational Behavior - Current); Frederick Morgeson (Personnel Psychology - Former); Steven Rogelberg (Journal of Business and Psychology - Current); Deborah Rupp (Journal of Management - Former); and Andrew H. Van de Ven (Academy of Management Discoveries - Current). We asked five questions and have summarized the responses below.

3.1. Question 1: why doesn't inductive research appear more often in top journals?

We received a variety of answers, but there was convergence on three root causes. The first was a lack of training. For example, a current editor responded that “most OB and I/O Psych researchers have been trained primarily in quantitative methods, with the clear expectation that research will be theory- and hypothesis-driven, rather than exploratory and data-driven.” Similarly, a former editor wrote that the prominence of deduction is in part attributable to “how we teach Ph.D. students how to conduct research.” Beyond a lack of training, many editors attributed the scarcity to the widespread belief among authors that inductive research is not well received at leading journals. For example, one editor mentioned that “the ‘game’ across journals basically only rewards deductive research”; another responded that “there is a persistent perception that [journals] focus on quant./deductive/positivist research”; and yet another editor responded, “most authors I think have the mindset that theory-testing is the only, or perhaps the most likely, route to publishing in top journals.” A final surprising yet common answer is that there is no shortage of inductive research, just a shortage of it being reported as inductive. That is, many studies begin from an inductive or abductive tradition but are ultimately presented as deductive. One editor stated, “a great deal of research involves a long process of trying out loose deductively-formed ideas, learning from these trials, inductively building theory, [but]…[m]any authors feel they can only report on a sub-section/cross-section of the process (the part that conforms to hypo-deductive reporting norms).” Echoing this sentiment, one editor wrote, “Authors only see the utility of writing their manuscript in a deductive frame (even if it was honestly inductive research).”

3.2. Question 2: what sort of changes relative to people's attitudes/orientations toward inductive vs. deductive research have you seen?

Responses to this question showed a clear divide between editors who have seen no change or relatively little change (e.g., “I'm not sure that I've seen that much of a change,” “I think it is a difficult road because the deductive approach has become so entrenched,” “I'm not sure that I've seen changes in people's attitudes toward inductive vs. deductive”) and those editors who believe changes have occurred (e.g., “I do see some anecdotal increase in people wanting to do mixed methods work,” “especially in the more sociological disciplines like OT and BPS…qualitative methods have now gained wide acceptance,” “I think there has been some movement toward acceptance of inductive research, particularly high quality qualitative research”). Among those who believe change has occurred, the perceived impetus of this change was quite varied, with some editors pointing to the establishment of inductively-oriented journals such as Academy of Management Discoveries, special issues such as the Journal of Business and Psychology's feature on inductive research, and the Journal of Management's recent issue on Bayesian methods.
Others pointed to “important recent-yet-historic milestones [such as] Locke's Journal of Management paper on inductive theory building” and “impactful (and buzz-generating) articles in journals like Academy of Management Journal and Administrative Science Quarterly [that] are inductive qualitative articles.” Finally, one editor noted that “the primary driver of change in the management and organizational behavior literature has come about as a direct result of the internationalization of the Academy of Management.”

3.3. Question 3: do you think that the field has struck a good balance between deductive and inductive research? If not, what can be done?

As with the previous question, there was substantial divergence in (a) whether the status quo represents an acceptable balance and (b) if there is an imbalance, what could be done to address it. Regarding what the balance should resemble, one current editor responded, “given the maturity of our field, I think it is understandable that we have more deductive than inductive research.” In the same vein, a former editor stated, “I would not define a balance as 50% inductive qualitative and 50% deductive quantitative.” This editor went on to say that a proper balance would resemble “page spaces in journals to match (approximately) whatever that world-wide scholar breakdown” and that the “balance is there at journals like Academy of Management Journal and Administrative Science Quarterly….” The sentiment that some areas or journals have struck a better balance than others was also reflected in a former editor's response that “the field of management is close to the right balance, [but] this is not the case in the industrial-organizational psychology and organizational behavior literature.”

Other editors contended that the field is substantially unbalanced. A former editor wrote, “I do not think a good balance has been struck…the pendulum has swung too far in favor of theory-development.” Echoing this, a current editor wrote, “it is evident that we have swung too far in the deductive direction.” Another current editor succinctly responded, “100% no.” Among those who saw an imbalance, several suggestions for correction were proposed. Some targeted journals as “first movers” that might engage in balance-inducing behaviors (e.g., “All journals need to encourage inductive research actively and overtly. These journals also need to align practices/rewards with welcoming this type of research”). Others suggested the key was in education and training (e.g., “Most organizational scholars are trained in the deductive approach and this is the lens they use when evaluating other research as reviewers. Until this changes…it will be difficult to make wide sweeping change”; “as more students receive training in inductive approaches, we will see more high quality inductive research being conducted and submitted through the review process”). Still others called on prominent researchers or groups of scholars to act in unison to initiate change (e.g., “as senior scholars speak out in support of inductive approaches, we will see more movement in the field”; “what is needed is a group such as EGOS to evolve in this field”).
3.4. Question 4: do you see any benefits to publishing inductive research in your journal?

Despite the differing missions and scopes of the outlets, editors responded with unanimous support for publishing inductive research in their respective journals. The reasons for this support included that inductive research “identifies new areas we do not know much or sufficiently about” and is “among our best sources of new theory building efforts.” Other editors noted that inductive research “can spark new areas of inquiry and help develop new theory in the field,” “open[s] the door to new insights,” and can “illuminate phenomena that are of significant practical importance.” Several editors broadened their responses to the field as a whole (as opposed to a specific journal). For example, one editor responded, “Just as science is served with access to all possible methodological and analytic tools, it is also served by leveraging all possible paradigms…To close down a scholarly path, inductive research, that is so well aligned with our applied phenomenological origins is highly counterproductive to our science and to our scholars.” Another editor indicated that “Science requires a continuing mix of theory building followed by extensive theory testing.” Germane to Jebb et al.'s article in this special issue, one response from a current editor was that an increase in inductive research “avoid[s] unintended consequences that can arise from forcing a deductive approach (e.g., post-hoc theorizing).”

3.5. Question 5: if you were to publish an inductive study in your journal, what sort of inductive research would you look for (e.g., what are the elements of inductive research that make it a good contribution)?

Editors responded to this question in a number of ways, but there were some common themes. First, a number of the editors mentioned the need for clarity (e.g., “a clear epistemology linking the research question to the methodology,” “[c]learly designed and transparent methodology,” and “methods need to be clearly defined within the article, and need to be rigorously carried out”). There was also convergence on the need for the method to be appropriate to the research question. For example, a current editor responded, “the sample and context need to be appropriate for the question being asked.” In a similar vein, a former editor stated, “it's about matching the approach and methods to the situation at hand in an effort to generate new knowledge and to have as much confidence as possible in the inferences being made/conclusions being drawn.” Perhaps the greatest level of agreement was that, regardless of methodology, “first and foremost the research should be evaluated based on the level of contribution.” This includes “[t]elling a credible, cogent, and convincing story,” offering a “very strong conceptual, intellectual and scientific rationale for the questions of interest,” and providing insights that are “important and relevant” and that “contribute to the improved management of people at work.”

4. Best-practice recommendations for inductive research

Some may argue that the reason inductive research is so infrequently published in top-tier journals is that most inductive manuscripts submitted are simply not “good enough.” But what constitutes a “good” inductive study in the first place?
Without clear, specific guidelines for conducting and evaluating high-quality inductive research, one's assessment of inductive research quality could simply be driven by his/her own subjective opinions (or a subconscious bias) against any research that deviates from what is normally published in the elite journals of the field, that is, deductive research with a set of explicit hypotheses, each accompanied by a lengthy theoretical rationale. It is a vicious cycle, indeed. Also, as noted earlier, this restricted view of “a good scientific contribution” is further perpetuated by the imbalanced training in our field, which puts a disproportionate emphasis on the importance of theory in comparison to the importance of data and phenomena. In view of this critical deficiency, we devote the current section to providing several best-practice recommendations for conducting, reporting, and evaluating inductive research in the organizational sciences, from the perspectives of both researchers/authors and reviewers/editors. Our recommendations are (at least in part) informed by a careful integration of the four articles included in this special issue, as well as the aforementioned journal editors' interview responses to Question 5.

4.1. Point 1: start with a clear purpose

Good inductive research is not intended to test theory-driven hypotheses, but that does not mean data are collected in a vacuum. Rather, an inductive research effort should normally begin with a clear purpose, and perhaps with a statement of the research questions the study is designed to answer. These questions might precede data collection and inform the methods that are used, but they most certainly need to be incorporated into papers in order to provide the “cogent, and convincing story” one of our editors noted would be needed to convince reviewers of a paper's publishability.

Some inductive studies will be designed within a framework that provides focus. For example, a dominant framework in the organizational sciences is the environment-perception-outcome temporal flow. This framework suggests that environmental conditions and events are perceived by individuals and lead to a variety of attitudes, behaviors, emotions, and motivations. Such frameworks cut across the study of many organizational phenomena, including leadership, motivation, stress, and teams. This framework can inform research questions, such as: what is the impact of leader behaviors on individual performance? Whereas one could rely on established theory to address this question, that reliance will focus the study on what is already known about leader behavior, and would be unlikely to uncover the potential impact of behaviors that have heretofore gone unnoticed and unstudied.
It is important, however, not to get too locked into dominant frameworks that assume relationships are due to some temporal order of events. It is potentially dangerous to assume, based on our limited knowledge, which phenomena are antecedents and which are consequences of the variables of interest. For example, stress models have assumed that counterproductive work behavior by employees is a consequence of stressors, but a five-wave longitudinal study by Meier and Spector (2013) suggests that the behavior might also be a driver of the stressors. So one category of questions that might be addressed inductively has to do with the likely direction of effects.

Even in cases where there is no specific framework that informs a study, it is still important to have a clear purpose and to frame it in terms of a question that the study can address. Such questions can merely ask whether there are relationships among specific variables or classes of variables. Others might focus on a particular variable and ask what predicts it (e.g., what motivates employees to cyberloaf?), or focus on a behavior and ask what its consequences might be (e.g., does employee cyberloafing have positive or negative effects on firm productivity?). Again, these questions can be addressed deductively with studies designed to confirm theories (e.g., a theory of cyberloafing or a theory of general counterproductive work behavior). But the deductive approach will limit the scope of which variables are investigated, as well as the analyses conducted and the conclusions reached. Thus we need inductive studies to address such questions so they are not constrained in their methods, analyses, and interpretations.

4.2. Point 2: exploit your data

Inductive research is, by definition, data-driven. It is not bound by a priori hypotheses or theoretical expectations that a certain pattern of results will appear. Instead, the first and foremost role of inductive research in scientific progress lies in detecting “new” phenomena of potential interest/significance, which may eventually lead to theory development. For a phenomenon to be reliably detected, one needs a large volume of empirical observations (i.e., data). Therefore, within the inductive paradigm, researchers should not be afraid of “exploiting” their data and should work to maximize their utility in all possible ways. The value of data for inductive science may be maximized by exercising “openness” through three major mechanisms: collection, analysis, and sharing of data.

4.2.1. Data collection: think outside of the box

Our first recommendation is to be creative with your data collection planning: practice “divergent thinking” to come up with multiple ways in which the data may be fully utilized for inductive (along with other) purposes. Here we consider three specific contexts in which implementing certain data collection strategies may be useful for “getting the most bang for your buck.” First, when collecting organizational survey data for testing a specific set of hypotheses, it is worthwhile to consider adding some extra variables, as well as open-ended questions for qualitative responses.
Doing so will afford researchers opportunities for serendipitous discoveries (i.e., detecting novel or surprising patterns in the data that may be of theoretical importance) and/or for further probing the phenomenon of interest in greater depth and breadth. Second, in situations where the researcher's primary purpose for the data collection is to gain an initial understanding of a relatively unknown content domain or phenomenon (e.g., understanding how employees turn in their resignation notice; see Klotz & Bolino, in press), we recommend allowing as much room as possible for qualitative exploration and theory building, and approaching it in a systematic and rigorous manner (for discussions of the specific ways in which the quality of qualitative research should be evaluated, see Murphy, Klotz, & Kreiner, 2016-in this issue). Third, when working with a large volume of data containing many interesting variables for exploration (e.g., archival HR records obtained from firm management, social media postings gleaned from the internet), the researcher needs to choose what information to harvest from the larger database to pursue his/her interests. In that case, it is advisable to capture the widest possible array of information (e.g., time period, number of variables, multimedia) that allows the researcher to look beyond a particular set of theoretical expectations. As discussed by McAbee and colleagues in this special issue, the availability of “Big Data” affords us the ability to gather more information than would be necessary for deductive research.

4.2.2. Data analysis: be flexible

Both inductive and abductive approaches to science entail an “open attitude” toward the possibility of finding surprising and intriguing patterns in data (Folger & Stein, 2016-in this issue; Jebb, Parrigon, & Woo, 2016-in this issue). To this end, there is nothing wrong with using a given data set for multiple analytic purposes, including testing a set of specific hypotheses as well as exploring more open-ended questions, as long as each use of the data is properly motivated and clearly communicated according to the research and publishing guidelines within the field (e.g., Academy of Management, 2011; American Psychological Association, 2016). We will come back to the issue of transparency in research reporting and publication practices at the end of this editorial, as it deserves a much more detailed discussion. Here, we focus on the intellectual flexibility of the analyst.

Breaking away from the rigid deductive/confirmatory hypothesis-testing approach can be liberating, allowing researchers to increase the flexibility of their approaches to studying their phenomena. Inductive/exploratory methods are very much problem-focused, attempting to better understand what happens and why. As noted earlier, such investigations are typically designed to address a particular purpose or question, and they might be designed with a particular framework or paradigm in mind. For example, a study might explore environmental factors that affect employee health or performance in a particular industry or profession. Stating the purpose in this way assumes that the environment is a driver of the outcomes of interest. The researcher is free to include variables in the study based on casual observation, hunches, and personal interest, as well as on prior findings.
A multitude of methodologies can be considered, and the use of multiple methods in the same study can address questions about whether findings are method-bound. Potential variables that might serve as biases or confounds, or that might serve as alternative explanations for observed results, can be included. All in all, our best-practice suggestion is that authors should thoroughly investigate their data using a variety of techniques, including descriptive, graphical, and inferential methods. The purpose is to find patterns in the data, and those patterns are not always linear, so testing for nonlinearity should be considered (see the sketch below). Effects are also not always additive, so looking at configurations of scores across variables can be potentially useful. Graphical methods can be quite helpful in illuminating such patterns (Jebb et al., 2016-in this issue). In this special issue, Jebb et al. (2016-in this issue) and McAbee et al. (2016-in this issue) provide suggestions for the use of exploratory data analysis and Big Data methods. Also potentially useful is the large literature on data mining and data science devoted to exploratory analysis (e.g., Aggarwal, 2015; Cios, Pedrycz, Swiniarski, & Kurgan, 2007). Finally, qualitative methods are often useful for discovering a new (structural and/or dynamic) pattern in the occurrence of events and socio-psychological relationships in organizational settings, which may go well beyond current theoretical understanding and possibly lead to the development of a new theory (Murphy et al., 2016-in this issue).
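To make the point about nonlinearity concrete, the following is a minimal sketch in Python of one way to probe a bivariate pattern beyond the default linear model. The variables (job tenure and a performance rating) and the data are hypothetical illustrations of ours, not drawn from any study cited here; the same logic applies to any pair of continuous variables.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: performance peaks at mid-tenure, then declines.
rng = np.random.default_rng(7)
tenure = rng.uniform(0, 20, 400)                 # years on the job
performance = (3 + 0.30 * tenure - 0.012 * tenure**2
               + rng.normal(0, 0.8, 400))        # supervisor rating

# Fit a linear model and a model that allows curvature, then compare fit.
X_lin = sm.add_constant(tenure)
X_quad = sm.add_constant(np.column_stack([tenure, tenure**2]))
fit_lin = sm.OLS(performance, X_lin).fit()
fit_quad = sm.OLS(performance, X_quad).fit()
print(f"Linear R^2 = {fit_lin.rsquared:.3f}; "
      f"quadratic R^2 = {fit_quad.rsquared:.3f}")

# A LOWESS smoother imposes no functional form at all; plotting it against
# the raw scatter is a quick graphical check for unanticipated patterns.
smoothed = sm.nonparametric.lowess(performance, tenure, frac=0.4)
print(smoothed[:5])  # (tenure, smoothed rating) pairs, sorted by tenure
```

In this illustration the quadratic term captures variance the linear fit misses; in an inductive write-up, such a pattern would be reported openly as exploratory and then replicated (see Point 3 below).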

4.2.3. Data sharing: be collaborative

We urge organizational scholars (both as individuals and as an entire scientific community) to work toward increasing data reusability in our research practices through active and open data sharing. Data once used for one researcher's deductive study could be a valuable source of information for another researcher's inductive efforts to discover a new and interesting phenomenon that has not yet been fully understood. Despite this potential, the field of organizational sciences has not fully embraced the idea of open and active data sharing, nor do we have a proper infrastructure for researchers to do so. In most cases, once data have been collected and used for a specific purpose, they are contained within the boundaries of a particular research group and shared only with a limited (selected) group of collaborators through the researchers' personal networks. Such exclusivity in data access severely limits our ability as a field to harness the value of the massive amount of data collected by hundreds of thousands of researchers around the world, a problem very much akin to that of food waste and hunger. Luckily, research data are in most cases far more transferable than food: the codified/digitized information can be easily packaged and distributed online, and it often comes without an expiration date. What is sorely needed in our community, however, is a proper channel (e.g., a data repository or online forum) that allows for open registration and downloading of data as appropriate and necessary. The email listserv of the Academy of Management's Organizational Behavior division (“OB-LIST”) recently featured an interesting thread of discussion on the issue of open-access data for addressing reproducibility-related issues. Although these discussions were contextualized by the need to ensure ethical, accurate, and reliable reporting of research findings (which we discuss later in this editorial), we clearly see many ways in which open data sharing can also lead to new, potentially significant discoveries through active collaboration among inductive researchers. While there certainly are cases where it is unsuitable to share one's research data with those outside the original investigator's laboratory due to human subjects or copyright-related concerns, there is little doubt that open data sharing practices in general will facilitate the reusability (and hence the value) of data, which would be extremely beneficial for inductive science.

4.3. Point 3: replicate and cross-validate your findings

One of the most important elements of science is that findings are shown to be reliable, that is, that they can be repeated by the original researcher who first noted a phenomenon and independently by others. Although this is true for all types of studies, the reliability and replicability of findings from inductive studies are particularly important. This is because, by their nature, inductive/exploratory studies can involve a great deal of data manipulation, in some cases computing dozens if not hundreds of statistics in an attempt to detect patterns. In doing so, it is likely that some findings will be no more than Type I errors, so ruling that out is important. We present four approaches to establishing the reliability and replicability of findings that can be incorporated into an inductive research report.
Given the scope of most projects, it is not likely that all four can be applied, but our best-practice recommendation is that at least one of these methods be reported, with more than one being the ideal case.

4.3.1. Cross-validation

Assuming a sufficient sample size, a cross-validation strategy is a relatively simple and straightforward means of demonstrating the reliability of a finding. A sample can be randomly divided into two parts, which may be equal or unequal in size. With large datasets, k-fold cross-validation can be performed, dividing the sample into as many as 10 subsamples to assure that results can be reproduced (Cios et al., 2007); a brief sketch follows below. If results are comparable across folds, one can combine them, either by analyzing all the data together or by averaging results across folds. An important limitation of cross-validation is that most organizational studies using commonly applied methods do not have sufficient sample size to provide adequate power to test all results in a study. For example, the typically small effect sizes in moderator analysis (Aguinis, Beaty, Boik, & Pierce, 2005) would result in low power if samples were split.
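As an illustration, here is a minimal sketch of the k-fold procedure described above, written in Python with scikit-learn. The data are simulated and the linear regression model is merely a stand-in for whatever exploratory model a researcher has settled on; assume only that predictors and an outcome are available as arrays.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

# Simulated stand-in data: 500 cases, three predictors, one outcome.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = X @ np.array([0.4, 0.2, 0.0]) + rng.normal(size=500)

# 10-fold cross-validation: refit on each training split,
# then evaluate on the held-out fold.
kfold = KFold(n_splits=10, shuffle=True, random_state=42)
fold_r2 = []
for train_idx, test_idx in kfold.split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    fold_r2.append(r2_score(y[test_idx], model.predict(X[test_idx])))

# Comparable R-squared values across folds suggest the pattern is not
# an artifact of one particular subsample.
print(f"Per-fold R^2: {np.round(fold_r2, 3)}")
print(f"Mean = {np.mean(fold_r2):.3f}, SD = {np.std(fold_r2):.3f}")
```

If the per-fold estimates diverge substantially, that is itself informative: the exploratory finding may be sample-dependent and should be reported with appropriate caution.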


4.3.2. Replication

A replication is the repeating of a study to see if the results of the first will be found in the second. Exact replications reproduce the methodology of the initial study precisely, whereas less-than-exact replications might differ in some meaningful ways, for example, in the instruments, populations, or settings. A replication might also include additional variables. A single paper can contain multiple samples that serve as replications of one another. As with cross-validation (which divides a single sample into parts), one would compare the samples, and if no differences were found, the data could be combined.

4.3.3. Constructive replication

A constructive replication is one that uses different methods to verify that the results of an investigation can be reproduced. It is a means of applying converging operations to rule out the possibility that reliable results are method-bound, that is, due to the method used rather than to the substantive variables of interest. As noted earlier, shared biases and common method variance can be controlled through the use of alternative methodologies. Although a constructive replication can be useful in showing that a phenomenon is reliable, it is not always a good substitute for an exact replication. This is because studies that use very different methodologies could yield the same observed relationship for different reasons; in other words, the measures and methods used might not reflect the same underlying constructs. Thus replication and constructive replication serve somewhat different purposes.

4.3.4. Comparisons with prior literature

Findings from an inductive/exploratory study might not always be unique, as it is possible that similar data patterns have been reported in the literature. Published (and unpublished) reports of the same or overlapping findings can be used as evidence that the current findings are reliable. This can be especially helpful when data are difficult to collect, thus limiting sample sizes. While there are a number of resources that can be used to locate overlapping studies (e.g., PsycINFO), a new and potentially useful tool is metaBUS (Bosco, Steel, Oswald, Uggerslev, & Field, 2015), an online search engine that mines a set of journals in the organizational sciences for univariate and bivariate statistics. We recommend that, when interesting patterns are found, researchers use available resources to see if at least some of those patterns have been reported in the past. Being able to cite comparable results provides some confidence in the reliability of the findings reported in an inductive paper.

4.4. Point 4: be transparent in reporting

Current practices at the many journals that all but require a deductive approach have led authors to position papers as deductive even when the underlying research was not. This masking of inductive and abductive research is at the heart of the debate on questionable research practices (QRPs; O'Boyle, Banks, & Gonzalez-Mule, in press). As discussed in O'Boyle et al.'s (in press) article on the “Chrysalis Effect,” QRPs include such practices as dropping hypotheses that fail to achieve statistical significance, hypothesizing after results are known (HARKing; Kerr, 1998), selectively deleting outliers, rounding off p-values (Bruns & Ioannidis, 2016), and a host of other practices that Bedeian, Taylor, and Miller (2010) classify as cardinal sins and various misdemeanors.
These questionable practices likely lead to an inflated Type I error rate in our literature, which makes it difficult to know which phenomena are real and which are merely due to sampling error. In the fervor to mitigate the very serious issue of QRPs, exploratory research has sometimes been wrongly implicated as a contributing factor. Jebb et al. (2016-in this issue) review how exploratory research is often conflated with QRPs. The crux of their argument is that inductive research, or abductive research for that matter, is not questionable when it is presented as such. The questionable aspect is the masking of research as deductive when in fact it is not. A better name for questionable research practices is questionable reporting practices (Wigboldus & Dotsch, 2015). In fact, it is difficult to imagine any of the practices purported to be QRPs as questionable or ethically ambiguous if fully reported. This is the importance of transparency: QRPs cannot exist where transparency is present, and vice versa. If the reader of a given paper is aware of the changes that occurred throughout the research and publication process, then judging the appropriateness of adding or dropping hypotheses, altering data, changing control variables, and so on becomes the reader's prerogative. Although transparency is a topic that spans all areas of best practice in research, what is germane to this special issue is that transparency allows for a clear ethical and scientific demarcation between exploratory research and exploratory research posing as confirmatory. In view of this, below we provide four specific recommendations for how research that diverges from the hypothetico-deductive model should be reported and evaluated.

First and most important of all, authors should be honest and report an inductive paper as inductive, and an abductive idea/theory as post hoc and based on the data. Second, the paper should explicitly state the scope of the dataset that was explored. If only certain variables were examined, that should be mentioned. For example, with large archival datasets, there could be a brief overview of the archive, followed by an explanation of which variables were chosen and why (e.g., the study was limited to personality traits and aspects of performance). This should be more than the usual “the data reported here were part of a larger dataset”; more detail than that is needed. Third, authors should clearly communicate the scope of their data analysis efforts, reporting not just what “worked” but also what did not. If only some of the findings are reported in the paper itself (e.g., only statistically or practically significant results), perhaps due to journal page limits or other considerations, this should be clearly stated and justified. Fourth, authors should keep accurate and detailed records of what they do not report, so that if someone wants to know about variables not in the paper, the necessary information can be easily generated upon request. It is also highly advisable to make those results readily available as online supplemental materials, if possible. On the other hand, if the authors had chosen to consider only a subset of all the possible analyses based on their purpose for the paper, they should not have to go back to the data to provide results that were not considered in the first place.


4.5. Concluding remarks: moving forward

In addition to offering recommendations for authors, we also call on the gatekeepers of our field (e.g., journal editors/reviewers and educators) to implement a few tangible strategies for further facilitating high-quality inductive research publications. First, journals are urged to explicitly state how many inductive papers they receive and how many are subsequently published. Offering transparency on submission frequencies and acceptance rates may demonstrate to researchers that the lack of inductive research in top journals is attributable to a lack of submissions, not a bias against exploratory research. Beyond documenting the number of inductive submissions, we also encourage journals to actively promote inductive submissions. It would appear that merely acknowledging a willingness to publish inductive research has not accomplished the goal of a more balanced distribution of exploratory and confirmatory work. Therefore, if passive willingness is insufficient, then active solicitation is needed, such as the 2015 special issue on inductive research in the Journal of Business and Psychology. Rather than entire issues devoted to induction, one suggestion would be for journal editors to actively solicit proposals from leading scholars to conduct exploratory research. Once a proposal is accepted, the journal would reserve space for its publication. If authors begin to see inductive research published on a regular basis, particularly in prestigious journals, then their likelihood of conducting exploratory research will increase and their likelihood of presenting exploratory work as confirmatory may decrease. Ultimately, the motivation for anything other than total transparency is reduced.

Also, more explicit author guidelines and institutional methods training should be offered toward the goals of increasing the number of high-quality inductive research submissions and expanding the pool of researchers capable of conducting rigorous inductive research in the first place. As we noted earlier, whereas there are enough resources on conducting confirmatory research to fill entire libraries, the question of what defines rigorous exploratory research is largely unanswered. Here again, we call on the editors of leading journals not only to be receptive to inductive research, but also to replace the ambiguity of their expectations with clear guidance as to what is expected of an exploratory submission. For example, does a post-hoc finding that emerged from the data need to be verified with an independent sample? Is statistical significance testing acceptable for exploratory research? Relatedly, Big Data often involves millions of observations, for which traditional tests of statistical significance are inappropriate (McAbee et al., 2016-in this issue); in the Big Data context, how is practical significance to be demonstrated? Beyond informing authors of what high-quality inductive research resembles, such guidance will also help reviewers evaluate inductive submissions.

Good science is as much about discovery as it is about confirmation. Our contention is that past and current trends have led to an overreliance on confirmatory research. Our intent is not to diminish confirmatory research or discourage its use in human resource management.
Rather, our intent is to strike a better balance among the three forms of inference. Induction, abduction, and deduction are complementary and symbiotic, and no one form can stand on its own. The recommendations offered here and in the four articles of this special issue aim to encourage researchers to conduct, accurately report, and submit their inductive and abductive research to journals, and to encourage gatekeepers to train for, appropriately evaluate, and publish inductive and abductive research. A better balance of the three forms of inference will go a long way toward advancing the field of human resource management as well as the broader organizational sciences.

References

Academy of Management (2011). Ethics of research & publishing video series. Retrieved from http://aom.org/About-AOM/Ethics-of-Research—Publishing-Video-Series.aspx
Aggarwal, C. C. (2015). Data mining: The textbook. New York: Springer.
Aguinis, H., Beaty, J. C., Boik, R. J., & Pierce, C. A. (2005). Effect size and power in assessing moderating effects of categorical variables using multiple regression: A 30-year review. Journal of Applied Psychology, 90(1), 94–107. http://dx.doi.org/10.1037/0021-9010.90.1.94
American Psychological Association (2016). Data transparency appendix examples. Retrieved from http://www.apa.org/pubs/journals/apl/data-transparency-appendix-example.aspx
Bedeian, A. G., Taylor, S. G., & Miller, A. N. (2010). Management science on the credibility bubble: Cardinal sins and various misdemeanors. Academy of Management Learning & Education, 9(4), 715–725.
Bosco, F. A., Steel, P., Oswald, F. L., Uggerslev, K. L., & Field, J. G. (2015). Cloud-based meta-analysis to bridge science and practice: Welcome to metaBUS. Personnel Assessment and Decisions, 1, 3–17.
Bruns, S. B., & Ioannidis, J. P. (2016). p-Curve and p-hacking in observational research. PLoS ONE, 11(2), e0149144.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105. http://dx.doi.org/10.1037/h0046016
Cios, K. J., Pedrycz, W., Swiniarski, R. W., & Kurgan, L. A. (2007). Data mining: A knowledge discovery approach. New York: Springer.
Cucina, J. M., & McDaniel, M. A. (2016). Pseudotheory proliferation is damaging the organizational sciences. Journal of Organizational Behavior. http://dx.doi.org/10.1002/job.2117 (in press).
Folger, R., & Stein, C. M. (2016). Abduction 101: Reasoning processes to aid discovery. Human Resource Management Review (in this issue).
Hambrick, D. C. (2007). The field of management's devotion to theory: Too much of a good thing? Academy of Management Journal, 50(6), 1346–1352. http://dx.doi.org/10.5465/AMJ.2007.28166119
Hanson, N. R. (1958a). Patterns of discovery. Cambridge: Cambridge University Press.
Hanson, N. R. (1958b). The logic of discovery. The Journal of Philosophy, 55(25), 1073–1089. http://dx.doi.org/10.2307/2022541
Jebb, A. T., Parrigon, S., & Woo, S. E. (2016). Exploratory data analysis as a foundation of inductive research. Human Resource Management Review (in this issue).
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. http://dx.doi.org/10.1207/s15327957pspr0203_4
Klotz, A. C., & Bolino, M. C. (2016). Saying goodbye: The nature, causes, and consequences of employee resignation styles. Journal of Applied Psychology (in press).


Locke, E. A. (2007). The case for inductive theory building. Journal of Management, 33(6), 867–890. http://dx.doi.org/10.1177/0149206307307636
MacCallum, R. C. (1995). Model specification: Procedures, strategies, and related issues. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 16–36). Thousand Oaks, CA: Sage.
McAbee, S., Landis, R., & Burke, M. (2016). Inductive reasoning: The promise of big data. Human Resource Management Review (in this issue).
Meier, L. L., & Spector, P. E. (2013). Reciprocal effects of work stressors and counterproductive work behavior: A five-wave longitudinal study. Journal of Applied Psychology, 98(3), 529–539. http://dx.doi.org/10.1037/a0031732
Murphy, C., Klotz, A., & Kreiner, G. (2016). Blue skies and black boxes: The promise (and practice) of grounded theory in human resource management research. Human Resource Management Review (in this issue).
O'Boyle, E. H., Banks, G. C., & Gonzalez-Mule, E. (2016). The chrysalis effect: How ugly initial results metamorphosize into beautiful articles. Journal of Management (in press).
Spector, P. E. (2015). Induction, deduction, abduction: Three legitimate approaches to organizational research. Video lecture for the Consortium for the Advancement of Research Methods and Analysis, University of North Dakota. https://razor.med.und.edu/carma/video
Spector, P. E., Rogelberg, S. G., Ryan, A. M., Schmitt, N., & Zedeck, S. (2014). Moving the pendulum back to the middle: Reflections on and introduction to the inductive research special issue of the Journal of Business and Psychology. Journal of Business and Psychology, 29, 499–502. http://dx.doi.org/10.1007/s10869-014-9372-7
Vickers, J. (2014). The problem of induction. The Stanford Encyclopedia of Philosophy (Spring 2014 ed.). Retrieved August 13, 2015, from http://plato.stanford.edu/archives/spr2014/entries/induction-problem/
Wigboldus, D. H., & Dotsch, R. (2015). Encourage playing with data and discourage questionable reporting practices. Psychometrika, 1–6.
