The Skeptic Hijack

David Huard ([email protected])

April 4, 2014

Abstract

A vocal fraction of the U.S. population is now dismissive of the consensus view on climate change, in spite of scientific evidence bordering on certainty. This paper suggests that organized climate change denial propagates memes that hijack the inferential process of genuine skeptics. Once skeptics accept those memes as plausible, any piece of evidence about anthropogenic climate change is systematically interpreted so that it reinforces the belief that climatology is junk science and climate change a scam, their skepticism gradually turning into denialism. This paper presents a Bayesian model for such an inferential hijacking mechanism, the key ingredient being a selective switch in the target of inference depending on whether or not evidence supports the anthropogenic global warming theory. Climate change is used here as an illustrative example, but the same ideas apply to a large class of beliefs. While the model proposed is only a crude and idealized representation of the cognitive processes involved in decision-making, it nonetheless seems useful in understanding a possible pathway leading from skepticism to denialism and may suggest counter-measures.

Keywords: denialism; Bayesian analysis; skepticism; conspiracy; inference

Disclaimer: I'm not a cognitive scientist and this paper is a humble but clumsy attempt to contribute to the field. The paper has been rejected twice for lack of empirical evidence supporting the proposed mechanism. If you are aware of experimental data that could be used to falsify the ideas presented herein, please tell me about it.

1 Introduction

While there are a number of legitimate reasons to disagree over the policy response to rising greenhouse gas concentrations (Hulme, 2009), the scientific basis for the reality of anthropogenic global warming (AGW) is bordering on certainty (IPCC, 2013). Despite this scientific consensus, a sizable fraction of the U.S. population denies the reality of climate change or its anthropogenic origin (Smith & Leiserowitz, 2012). An analysis of ten Gallup surveys by McCright and Dunlap (2011) indicates that conservative white males in the U.S. are more likely to endorse denialist views on climate change than are other Americans. Even more interesting is the observation that denialist views are more pronounced in those who self-report understanding global warming very well (Leiserowitz, Maibach, Roser-Renouf, & Hmielowski, 2011). That opposition to climate science generally increases with science literacy suggests to Lewandowsky, Gignac, and Oberauer (2013) that rejection of AGW is "a cognitive style rather than a deficit of knowledge or ability." Moreover, "one cognitive style that has been repeatedly implicated in science denial is conspiratorial thinking, also known as conspiracist ideation."

While climate scientists are sorely tempted to dismiss deniers as irrational thinkers (read conspiracy nut-jobs), it seems more fruitful to instead assume they are acting as rational agents and strive to understand the rationality model underlying denialism. As argued by Wilkins (2009) about his own prior belief in creationism, a bounded model of rationality can explain the adoption of anti-science beliefs. This paper thus proposes a simple cognitive model describing how rational skeptics can progressively turn into deniers holding extremely polarized beliefs. Conspiracist ideation is a key ingredient in this model, triggering a switch of inferential target depending on whether evidence supports AGW or not. The conceptual model proposed is an idealized and simplified version of the reasoning process and does not preclude other ways one can turn into a denier, such as group conformism. Nonetheless, it provides a useful framework to reflect on denialism and identify potential responses.

2 Plausible reasoning

"Probability theory is nothing but common sense reduced to calculations."
Laplace, 1819

In their grandly titled book "Probability Theory: The Logic of Science", Jaynes and Bretthorst (2003) discuss inference mechanisms using the idea of a robot capable of plausible reasoning. Jaynes lays out three desiderata for the reasoning abilities of his robot: 1) Degrees of plausibility are represented by real numbers, 2) Qualitative correspondence with common sense, and 3) Consistency. From these desiderata he derives quantitative rules to manipulate plausibility statements, that is, an algebra to conduct inference on hypotheses based on evidence and prior assumptions. It turns out that these rules are the exact same ones governing probability theory. What Jaynes thus suggests is that his thinking robot, using the laws of probability to update its beliefs, would "think" in ways that are qualitatively similar to ours.

Besides Jaynes's intuition, there are a few other reasons to believe that probability theory, or Bayesian analysis, is a useful model of human reasoning. One is the evolutionary perspective on rationality: "Humans are an intensely social species, and thus should possess cognitive machinery designed to handle the computational problems associated with social life" (Haselton et al., 2009). In other words, humans are more likely than not to have evolved inferential abilities that, on average, reflect probabilities in the external world. Experimentally, Griffiths and Tenenbaum (2006) have shown how everyday cognitive judgements reveal a "close correspondence between people's implicit probabilistic models and the statistics of the world". In a follow-up study, Lewandowsky, Griffiths, and Kalish (2009) further conclude that "the prediction functions produced by individual participants conformed to those expected from an optimal Bayesian agent using the appropriate actual prior." A Bayesian model of judgement is thus used in the following to illustrate how the reasoning abilities of skeptics can be hijacked. (Note that this paper only attempts to describe the logical reasoning of bona fide skeptics, not those knowingly engaged in deception or manipulation.)

Let's first define H as the anthropogenic global warming (AGW) hypothesis: "Human activity is the cause of climate changes well above recent natural variability". The robot's belief in H is quantified by p(H), the probability of H being true, with probabilities understood here in the sense of a subjective degree of confidence in a statement. Starting with an initial, subjective assessment of p(H), our objective is to update it iteratively with a series of pieces of evidence E_1, E_2, ..., E_n gleaned from mass media news stories, conversations or blog posts. This updating process is conducted using Bayes' theorem:

p(H|E) = p(E|H) p(H) / p(E),                                    (1)

which follows directly from the product rule of probability theory: p(A, B) = p(A|B) p(B) = p(B|A) p(A), where p(A, B) stands for the joint probability of A and B being true and the vertical bar | stands for "given". In Bayesian analysis jargon, p(H) is called the prior, the probability of a hypothesis independently of the current evidence. The denominator p(E) is a normalizing constant that can be interpreted as a way to cancel the probability for E found in the numerator, so that only the change in probability related to assuming H is true remains. One way to compute it numerically is to sum the probability for E conditional on the complete set of hypotheses. That is, if we let H̄ stand for the complement of H ("AGW is bunk"), then either H or H̄ must be true (by convention, a true hypothesis has probability one and a false hypothesis has probability zero) and p(H|E) + p(H̄|E) = 1. Applying Bayes' theorem to both terms yields p(E) = p(E|H) p(H) + p(E|H̄) p(H̄), which is amenable to computation.

To illustrate how the robot would use this updating mechanism, let E stand for "Ocean levels are rising due to increased temperature and glacier melt" and evaluate p(H|E). If we set the prior p(H) to a low value of 0.05 (weak initial confidence in H) and assign plausible values for the likelihoods, p(E|H) = 0.8 and p(E|H̄) = 0.1, we obtain p(H|E) ≈ 0.3. In other words, given evidence E, the robot's confidence in H has increased from 0.05 to approximately 0.3.
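For readers who prefer to see the arithmetic spelled out, here is a minimal sketch of this single update in Python; the function name bayes_update and the code organization are mine, but the numbers are those of the example above:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return p(H|E) from the prior p(H) and the two likelihoods p(E|H) and p(E|not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # normalizing constant p(E)
    return p_e_given_h * prior / p_e

# E: "Ocean levels are rising due to increased temperature and glacier melt"
posterior = bayes_update(prior=0.05, p_e_given_h=0.8, p_e_given_not_h=0.1)
print(round(posterior, 3))  # 0.296: the robot's confidence in H rises from 0.05 to about 0.3
```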

3 The human factor

Although this may be a simple calculation to perform for a robot, it's hard to believe a human brain is capable of computing this normalizing constant intuitively. A more plausible representation of our mental process, one which seems easier to carry out intuitively, would probably look like a ratio of probabilities, odds, between competing hypotheses:

o(H|E) ≡ p(H|E) / p(H̄|E) = [p(E|H) / p(E|H̄)] o(H).                (2)

With this heuristic, only valid for two contradictory hypotheses, the odds for H are updated from their initial value o(H) by multiplication with a ratio of likelihoods. Using the same numerical values as in the preceding example, we find that the odds for H increase from 1:19 to 8:19 given the evidence about ocean levels. This formalism in terms of odds rather than absolute probabilities is not a necessary condition for what follows, but it makes the presentation more compact and easier to follow.

Now what happens if, after processing E_1, we want to proceed with a second piece of evidence E_2? The most straightforward option is to evaluate o(H|E_1, E_2), but this approach breaks down in real life, where not just two pieces of evidence are processed but thousands of them over many years. It seems impossible to remember and process every piece of evidence that went into the construction of our beliefs each time new evidence presents itself. A fallback strategy is to sequentially update the probability for H and forget about the evidence leading to it. Defining

o_0 = o(H),    and    o_i = [p(E_i|H) / p(E_i|H̄)] o_{i-1},                (3)

we have a recursive function that updates the odds for H depending only on the odds from the previous assessment and the evidence at hand, E_i. This second heuristic describes a plausible mechanism by which one could process multiple pieces of evidence sequentially to update one's confidence in a hypothesis.

Note that an obvious weakness of this strategy is that it neglects the joint probability of the evidence. Instead of evaluating p(E_1, E_2, ..., E_n|H), it evaluates p(E_1|H), p(E_2|H), ..., p(E_n|H) independently. This leaves the robot open to persuasion by repetition. Indeed, if a single piece of evidence E is packaged into many different news stories E_1, E_2, E_3, a joint analysis would consider that p(E_1, E_2, E_3|H) = p(E|H), while the sequential analysis would likely miss the deception and update the odds three times instead of only once.

In its current state, our robot is also vulnerable to deceptive arguments. Indeed, let E stand for "Variations in solar radiance correlate with observed warming". On its own, this piece of evidence would lower the odds for H. The issue is that although E is factually wrong, our robot has no way to find out and dismiss the evidence; it is climate illiterate (Clement, Kirtman, & Pirani, 2011). Let's give our robot some background knowledge K of physics, chemistry and climate dynamics. Now the faulty evidence E linking observed warming with solar radiation contradicts this knowledge K, whether H is true or not. In other words, p(E|H, K) = p(E|H̄, K) = p(E|K), so that o(H|E, K) = o(H|K) and the odds remain unchanged by this faulty evidence. Being climate literate, our robot is now able to discriminate valid evidence from fallacious arguments.

Naively, we might expect that all climate literate robots, whatever their prior assessments of o(H), would eventually converge to similar a posteriori probability assessments for H after being exposed to multiple unbiased pieces of evidence. A famous experiment by Lord, Ross, and Lepper (1979), however, contradicts this optimistic view. In this experiment, subjects were given two studies about the death penalty, one confirming and the other disconfirming the deterrent efficacy of the measure. Both proponents and opponents of capital punishment rated the evidence as more convincing when it confirmed their own views and reported a corresponding shift in their beliefs. Providing people with the same information, instead of narrowing their differences of opinion, further polarized their beliefs, due to "an inclination to retain a currently favoured hypothesis" (Klayman, 1995).

Here we suggest another mechanism by which beliefs become polarized, hinging on a skeptical approach to beliefs. In the following, being skeptical is understood as being willing to question the validity of the evidence presented, as well as one's own background knowledge. Faced with evidence, instead of simply updating o(H|E, K), a skeptic would entertain the notion that something in their knowledge K might be wrong, that is, evaluate o(K|H, E). They could also question the credibility of the evidence and its source by instead evaluating o(E|H, K). The target of the inferential process can be either H, K or E, depending on the context and the plausibility of the different pieces of information entering the inference. This ability to select the target of inference is a necessary asset for critical thinking. For example, imagine our robot being fed a newspaper article about the invention of a perpetual motion machine. A credulous robot would use this evidence to update its belief in the existence of a free and infinite source of energy; a robot with scientific literacy would simply ignore the evidence. However, a skeptical robot would take time to evaluate which option is more likely: free energy may be possible contrary to all physical conservation laws, the inventor is deluding himself, or the journalist did not understand the invention. This willingness to switch the inferential target is both a source of intellectual strength and a vulnerability that can be exploited.
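Before moving on, here is a small Python sketch of the sequential odds update of Eq. (3), illustrating both the vulnerability to repetition and the filtering effect of background knowledge K. The update_odds helper and the likelihood-ratio values are illustrative assumptions, not part of the paper's formalism:

```python
def update_odds(odds, likelihood_ratio):
    """One step of Eq. (3): multiply the current odds for H by p(E_i|H) / p(E_i|not-H)."""
    return odds * likelihood_ratio

odds = 0.05 / 0.95             # prior of the earlier example: p(H) = 0.05, i.e. odds of 1:19

odds = update_odds(odds, 8.0)  # sea-level evidence (0.8 / 0.1 = 8): odds rise from 1:19 to 8:19

# The same story repackaged as two more "independent" news items is counted again:
# the heuristic keeps no memory of the joint evidence, so repetition persuades.
for _ in range(2):
    odds = update_odds(odds, 8.0)

# Faulty evidence that contradicts background knowledge K has p(E|H, K) = p(E|not-H, K),
# a likelihood ratio of one, and therefore leaves the odds untouched.
odds = update_odds(odds, 1.0)

print(round(odds, 1))          # ~26.9: one piece of evidence, counted three times
```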

4 The skeptic hijack

Dunlap and McCright (2011) nicely portray the plumbing of organized climate change denial. In short, conservative think tanks are funded by the fossil fuel industry, corporate interests and conservative foundations to set up fake independent front groups, astroturf campaigns and blogs churning out contrarian messages. These fabricated stories end up in the public arena, pushed by ideologically friendly blogs, politicians and media outlets, replaying key talking points ad nauseam in a deafening but vacuous echo chamber. Although this depiction sounds very much like a conspiracy theory of its own, the influence and money trail is rather well documented (Hoggan & Littlemore, 2009; McCright & Dunlap, 2010; Oreskes & Conway, 2010). Moreover, similar tactics and the same fake experts have been used to discredit the link between smoking and cancer, CFCs and the ozone hole, and sulphur emissions and acid rain.

Professional climate deniers have a wide range of strategies to convince the public that climate change is not a serious issue, but here we focus on two specific memes. The first one is that climate change is not happening and/or is mostly natural. The effect of this meme is to make AGW questionable, to raise doubts about its certainty and reduce the odds for H so that it becomes a legitimate target for inference. Arguments against H are usually manufactured using a variety of techniques, including fake experts, cherry-picked results and logical fallacies (Farmer & Cook, 2013). Quality does not matter so much as long as the argument is repeated ad nauseam from many apparently independent news sources.

The second meme is what Mckewon (2012) calls fantasy themes, which include "Climate Scientists are Rent-seeking Frauds", "Climate Science is a Religion", "Climate Science is a Left-wing Political Conspiracy" and "Climate Change Mitigation is a Money-spinning Scam". All these themes suggest that climate science backing the consensus view is politically, financially or ideologically motivated. Put bluntly, this meme suggests that climate science evidence is doctored to agree with AGW. This conspiracist ideation is the key ingredient for the skeptic hijack, because once someone believes this meme, there is no point in using the evidence to weigh the AGW hypothesis, since, by definition, it will support it. The hijacked skeptic thus reasonably switches the inferential target from the hypothesis under consideration, H, to the evidence presented, E. In the Bayesian formalism, the inference thus switches from evaluating the odds of H being true, o(H|E, K), to evaluating the odds of E being true, o(E|H, K).

Let our skeptical robot be influenced by these two memes: its odds o(H) are below one (H̄ is more likely than H) and it suspects climatologists doctor the evidence to support the AGW hypothesis. Faced with evidence E− negating AGW, the robot will use Eq. (3) to update o(H|E−, K), further lowering the odds for H. On the other hand, faced with a piece of evidence E+ backing the AGW hypothesis, the robot switches the target of inference and evaluates o(E+|H̄, K); given H̄ (AGW is bunk), the skeptic has reasons to believe E+ is a fallacious or unreliable piece of evidence. This, in turn, strengthens its belief that the scientists behind the study (and, by association, other climatologists) are frauds. Hence, as long as it is constantly fed with evidence, for and against AGW, this switch of inferential target ensures that both the probability of AGW being true and the credibility of climatologists inexorably converge to zero (see Figure 1). Our skeptical robot has turned into a denier, and no amount of climatic evidence can reverse this trend.

A consequence of this inferential switch is that hijacked skeptics would eventually come to the conclusion that most climate scientists are either frauds or incompetent. The probability of so many climate scientists being independently crooked should appear vanishingly low to rational thinkers, raising a red flag about the validity of their judgement. One way to make this extraordinary coincidence more likely is to assume that climate scientists take part in a grand conspiracy. Indeed, not only is the existence of a conspiracy among climatologists, or at least of common interests, heavily promoted by the fantasy themes outlined above, but surveys among climate blog visitors have shown a strong association between conspiracist ideation and the rejection of science (Lewandowsky, Cook, Oberauer, & Hubble-Marriott, 2013; Lewandowsky, Oberauer, & Gignac, 2013), paralleling results from surveys among the U.S. population (Lewandowsky, Gignac, & Oberauer, 2013).
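The switch itself can be sketched in a few lines of Python. The credibility discount factor, the likelihood ratios and the length of the evidence stream are all illustrative assumptions of mine; the point is only that, under the switch, both quantities drift toward zero regardless of the mix of evidence:

```python
import random

def hijacked_update(odds_h, credibility, likelihood_ratio):
    """One step of the hijacked inference sketched in Figure 1.
    Evidence against AGW (likelihood_ratio < 1) updates the odds for H as in Eq. (3);
    evidence for AGW triggers the switch: the falsity of H is taken for granted and it is
    the credibility of the evidence and its source that is revised downward instead."""
    if likelihood_ratio < 1:
        odds_h *= likelihood_ratio      # ordinary Bayesian update of H
    else:
        credibility *= 0.9              # illustrative discount of the source's credibility
    return odds_h, credibility

random.seed(1)
odds_h, credibility = 0.5, 1.0          # the memes at work: odds for H already below one
for _ in range(200):                    # a mixed stream of pro- and anti-AGW stories
    ratio = random.choice([8.0, 1 / 8.0])
    odds_h, credibility = hijacked_update(odds_h, credibility, ratio)

print(odds_h, credibility)              # both converge toward zero: the skeptic is now a denier
```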


Figure 1: Flow chart of the inferential process of hijacked skeptics.

5 Discussion

One testable prediction from this as-yet unverified model is that hijacked skeptics would apply an apparent double standard to evidence: evidence against AGW would be accepted as is, while evidence for AGW would be carefully dissected for methodological flaws. While this behaviour appears intellectually problematic from the outside, it is perfectly consistent with the inferential process just described. In the case where evidence supports AGW, careful evaluation of the methodology is warranted, since the evidence itself is the inferential target. On the other hand, when evidence contradicts AGW, E is interpreted as a given used to infer the probability of the AGW hypothesis.

While there are similarities between the skeptic hijack and the type of confirmation bias described by Lord (called "hypothesis-preserving evaluations of credibility" by Klayman), the latter does not seem to involve a switch in the target of inference, but rather a joint probability estimate of the hypothesis and the evidence. What Lord concludes about this confirmation bias is, however, also true of the skeptic hijack: "their sin lay in their readiness to use evidence already processed in a biased manner to bolster the very theory or belief that initially 'justified' the processing bias. In so doing, subjects exposed themselves to the familiar risk of making their hypotheses unfalsifiable – a serious risk in a domain where it is clear that at least one party in a dispute holds a false hypothesis..."


To clarify, although the inferential switch is a rational response to a prior assumption (that climate science is "motivated" by considerations other than truth), this prior assumption is never tested against evidence. Each inferential step is thus carried out rationally, but the process as a whole is irrational, since it leads to a predetermined conclusion whatever the evidence. I would suspect that deniers, analyzing their own judgement, are convinced of their own rationality and pride themselves on the intellectual high ground they occupy, as well as on their capacity to defend their opinion (Mercier & Sperber, 2011). To the outside world, however, their stance appears rather delusional.

6 Conclusion

Dunlap and McCright (2011) conclude that the denial machine has "certainly had a profound impact on the way in which climate change is perceived, discussed, and increasingly debated particularly within the US." Denialism puts at stake not only climate policy, but the acceptance of science in general, with potentially dire consequences. The recent vaccine scare is a case in point, where the rejection and distrust of medical science has led to a resurgence of preventable diseases (Goldacre, 2010). Denialism should hence be viewed not as a fringe movement but as a challenge to public policy.

It is not obvious, however, what response should be given to denialism. Lewandowsky, Ecker, Seifert, Schwarz, and Cook (2012) offer recommendations on how to respond to misinformation. One such recommendation is to provide alternative narratives that fill the gap created by debunking and that are more compelling than the myths they replace. Another option, suggested by The Skeptics Society's executive director Michael Shermer, is to refuse to engage in a debate against deniers and rather discuss differences in belief systems (Shermer, 2002), e.g. science versus pseudo-science. Yet another option is to expose professional deniers for what they are, public relations staff rather than "concerned citizens" (Diethelm & McKee, 2009), so that evidence stemming from those sources is interpreted as motivated propaganda and more easily dismissed.

A few other suggestions might be proposed, inspired by the hijack model. One may be to refrain from referring to the IPCC reports when addressing skeptics. Often, the IPCC is cited as the ultimate reference for all things related to AGW. It is fairly easy in this context for deniers to dismiss this one authoritative source as being tainted, along with all participating scientists. In the conspiracy logic, the fact that independent lines of evidence support AGW is better explained by their relation to the IPCC and its motives than by an underlying physical reality uncovered by independent observers. The model also suggests that information about climate science could more easily be accepted by deniers if it does not relate to climate change. Discussing topics in climate dynamics and meteorology such as hurricanes, the jet stream, the meridional circulation and ice-albedo feedback, without reference to climate change, could help build climate literacy (K) in a non-confrontational way. Arguments from deniers would then have to fold into this body of knowledge, limiting their influence on skeptics. In the same vein, there is a risk in oversimplifying climate science in the name of "public understanding", because it could reinforce skeptics' illusion of mastering the topic and lead them to underestimate the expertise and credibility of professional climatologists.

It is hard not to see in the denialist movement the Dunning-Kruger effect at work, whereby unskilled individuals not only reach erroneous conclusions, but suffer from an illusory perception of their superiority due to their incapacity to even recognize their incompetence (Kruger & Dunning, 1999). This suggests that the potential victims of the skeptic hijack are "wannabe skeptics", individuals with enough confidence in their understanding of science to question arguments from authority, but not enough to grasp the magnitude of their ignorance. If widely adopted, the resulting distrustful and defiant stance against science and its institutions has the potential to weaken the network of trust relationships so necessary for the progress of science.

References

Clement, A., Kirtman, B., & Pirani, A. (2011, May). Climate Literacy as a Foundation for Progress in Predicting and Adapting to the Climate of the Coming Decades. Bulletin of the American Meteorological Society, 92(5), 633–635. doi: 10.1175/2010BAMS3161.1

Diethelm, P., & McKee, M. (2009). Denialism: what is it and how should scientists respond? European Journal of Public Health, 19(1), 2–4. doi: 10.1093/eurpub/ckn139

Dunlap, R. E., & McCright, A. M. (2011). Organized Climate Change Denial. In The Oxford Handbook of Climate Change and Society (pp. 144–160).

Farmer, G. T., & Cook, J. (2013). Understanding Climate Change Denial. In Climate Change Science: A Modern Synthesis (Vol. 1). Dordrecht: Springer Netherlands. doi: 10.1007/978-94-007-5757-8

Goldacre, B. (2010). Bad Science. Random House.

Griffiths, T. L., & Tenenbaum, J. B. (2006, September). Optimal predictions in everyday cognition. Psychological Science, 17(9), 767–773. doi: 10.1111/j.1467-9280.2006.01780.x

Haselton, M. G., Bryant, G. A., Wilke, A., Frederick, D. A., Galperin, A., Frankenhuis, W. E., & Moore, T. (2009, October). Adaptive Rationality: An Evolutionary Perspective on Cognitive Bias. Social Cognition, 27(5), 733–763. doi: 10.1521/soco.2009.27.5.733

Hoggan, J., & Littlemore, R. (2009). Climate Cover-Up. D&M.

Hulme, M. (2009). Why We Disagree About Climate Change: Understanding Controversy, Inaction and Opportunity. Cambridge University Press.

IPCC. (2013). Summary for Policymakers. In T. Stocker et al. (Eds.), Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press.

Jaynes, E. T., & Bretthorst, G. L. (2003). Probability Theory: The Logic of Science. Cambridge University Press.

Klayman, J. (1995). Varieties of confirmation bias. The Psychology of Learning and Motivation, 32, 385–418.

Kruger, J., & Dunning, D. (1999, December). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10626367

Leiserowitz, A., Maibach, E. W., Roser-Renouf, C., & Hmielowski, J. D. (2011). Politics & Global Warming: Democrats, Republicans, Independents, and the Tea Party (Tech. Rep.). Yale University and George Mason University.

Lewandowsky, S., Cook, J., Oberauer, K., & Hubble-Marriott, M. (2013). Recursive fury: Conspiracist ideation in the blogosphere in response to research on conspiracist ideation. Frontiers in Psychology, 4(73). doi: 10.3389/fpsyg.2013.00073

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012, September). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest, 13(3), 106–131. doi: 10.1177/1529100612451018

Lewandowsky, S., Gignac, G. E., & Oberauer, K. (2013). The role of conspiracist ideation and worldviews in predicting rejection of science. PLoS ONE, 8(10), e75637. doi: 10.1371/journal.pone.0075637

Lewandowsky, S., Griffiths, T. L., & Kalish, M. L. (2009, August). The Wisdom of Individuals: Exploring People's Knowledge About Everyday Events Using Iterated Learning. Cognitive Science, 33(6), 969–998. doi: 10.1111/j.1551-6709.2009.01045.x

Lewandowsky, S., Oberauer, K., & Gignac, G. E. (2013). NASA Faked the Moon Landing–Therefore, (Climate) Science Is a Hoax: An Anatomy of the Motivated Rejection of Science. Psychological Science, 24(5), 622–633. doi: 10.1177/0956797612457686

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.

McCright, A. M., & Dunlap, R. E. (2010, May). Anti-reflexivity: The American Conservative Movement's Success in Undermining Climate Science and Policy. Theory, Culture & Society, 27(2-3), 100–133. doi: 10.1177/0263276409356001

McCright, A. M., & Dunlap, R. E. (2011, October). Cool dudes: The denial of climate change among conservative white males in the United States. Global Environmental Change, 21(4), 1163–1172. doi: 10.1016/j.gloenvcha.2011.06.003

Mckewon, E. (2012). Talking points ammo: The use of neoliberal think tank fantasy themes to delegitimise scientific knowledge of climate change in Australian newspapers. Journalism Studies, 13(2), 277–297.

Mercier, H., & Sperber, D. (2011, April). Why do humans reason? Arguments for an argumentative theory. The Behavioral and Brain Sciences, 34(2), 57–74; discussion 74–111. doi: 10.1017/S0140525X10000968

Oreskes, N., & Conway, E. M. (2010). Merchants of Doubt. Bloomsbury US.

Shermer, M. (2002). Why People Believe Weird Things. Macmillan.

Smith, N., & Leiserowitz, A. (2012, June). The Rise of Global Warming Skepticism: Exploring Affective Image Associations in the United States Over Time. Risk Analysis, 32(6), 1021–1032. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/22486296 doi: 10.1111/j.1539-6924.2012.01801.x

Wilkins, J. S. (2009, April). Are creationists rational? Synthese, 178(2), 207–218. doi: 10.1007/s11229-009-9544-6
