The Effect of Institutions on Behavior and Brain Activity: Insights from EEG and Timed-Response Experiments *

Cheryl Boudreau University of California, Davis Department of Political Science One Shields Avenue Davis, CA 95616 [email protected]

Seana Coulson University of California, San Diego Department of Cognitive Science 9500 Gilman Drive La Jolla, CA 92093 [email protected]

Mathew D. McCubbins University of California, San Diego Department of Political Science 9500 Gilman Drive, Mail Code 0521 La Jolla, CA 92093-0521 [email protected]

*We thank the National Science Foundation, Grant #SES-0616904, the Kavli Institute for Brain and Mind, and the Chancellor’s Associates Chair VIII at UC San Diego for providing financial support for these experiments. We are also grateful to Scott MacKenzie and participants in the symposium on “Neuroscience, Law, Economics, and Politics” at the University of Southern California School of Law for helpful comments on an earlier draft of this paper.

Abstract

While much is known about citizens' decisions to trust others, much less is known about the cognitive mechanisms that underlie these decisions. Thus, we analyze both the behavior and brain activity associated with trust by replicating well known experiments on trust with electroencephalograph (EEG) and timed-response technology. Although our behavioral results are consistent with previous research, our EEG results reveal something about trust that we do not learn from simply observing subjects' decisions and reaction times. Specifically, they demonstrate that subjects process information differently when it comes from someone who is trustworthy by virtue of sharing common interests with them versus someone who is made trustworthy by an external institution. This processing difference exists even though subjects are equally likely to base their decisions upon the statements of these two trustworthy individuals and even though they take the same amount of time to make their decisions. Given these differences between subjects' behavior and brain activity, it appears that recording EEGs adds a new dimension to our understanding of subjects' decisions to trust the statements of others.

In many political, legal, and economic contexts, citizens must make decisions for which they are not fully informed. As a result, they must often rely upon the statements of individuals that they do not personally know and that they may (or may not) trust when making their decisions (Lupia and McCubbins 1998; Lupia 1992, 1994; Mondak 1993; Sniderman, Brody, and Tetlock 1991; Druckman 2001a, 2001b, 2001c; Boudreau 2006). For example, when choosing among different candidates for office, uninformed citizens may rely upon the statements of politicians and upon the endorsements of interest groups. Or, when deciding a question at trial, jurors must rely upon the statements of attorneys and their witnesses. Similarly, when choosing among products, consumers often rely upon information provided by the sellers themselves and upon the opinions of endorsers, such as Consumer Reports and the Better Business Bureau.

Given the many contexts in which citizens must decide whether to trust another individual, it is not surprising that political scientists, economists, psychologists, and many others study factors that increase or decrease trust, as well as the behavioral consequences of trust and distrust. What is surprising, however, is that while scholars who study trust in other disciplines have begun to complement their behavioral studies of trust with neuroscientific and biological analyses (McCabe et al. 2001; Zak et al. 2004, 2005; Kosfeld et al. 2005; King-Casas et al. 2005), political scientists have yet to follow suit. This is surprising not only because political scientists have always been at the forefront of debates about trust, but also because it seems at odds with our discipline's emphasis on the benefits of using multiple methods to study particular political phenomena (Theiss-Morse et al. 1991; King, Keohane, and Verba 1994; McDermott 2002; Laitin 2002, 2003; Cacioppo and Visser 2003). Indeed, just as it is important to combine experimental analyses with survey results and large-N studies with case studies, it is also important for scholars to combine behavioral experiments with analyses of subjects' brain activity in order to understand both behavioral outcomes and the cognitive processes underlying them (Cacioppo et al. 2003; Cacioppo and Visser 2003; Albertson and Brehm 2003). Further, given that political scientists have already begun to use neuroscientific and biological analyses to shed new light on other political phenomena (such as sophistication, turnout, affect, and conflict; see, e.g., Wahlke and Lodge 1972; Morris, Squires, Taber, and Lodge 2003; Lieberman, Schreiber, and Ochsner 2003; Alford, Funk, and Hibbing 2005; Johnson et al. 2006; Wilson, Stevenson, and Potts 2006; Alford and Hibbing 2006; Mutz 2007; Fowler and Dawes 2008; Cesarini et al. 2008; Dickson, Scheve, and Stanley 2008; Boudreau, McCubbins, and Coulson 2008), it is important that we do the same in our analyses of trust.

Our study takes one step in this direction by analyzing both the behavior and brain activity associated with trust. Specifically, we build upon well known behavioral experiments on trust (namely, those of Lupia and McCubbins 1998) that show that subjects' decisions to trust another individual's statements depend upon the perceived trustworthiness of that other individual (dubbed "the reporter" in their experiments and throughout this paper). For example, Lupia and McCubbins show that subjects who perceive that the reporter shares common interests with them are significantly more likely to trust the reporter's statements than are subjects who perceive that the reporter's interests conflict with their own. When the reporter has conflicting interests with subjects, but is made trustworthy by an institution (such as a penalty for lying that is large enough to ensure that the reporter has a dominant strategy to tell the truth), subjects trust the reporter's statements at a rate that is similar to the rate at which they trust the reporter's statements when they know that the reporter shares common interests with them. From these results, Lupia and McCubbins conclude that institutions can substitute for common interests because they, too, induce citizens to trust the reporter's statements.

The question that Lupia and McCubbins's (1998) experiments leave open is: Do brain activity and reaction time measures also indicate that subjects view information in the same way when it comes from a trustworthy individual (i.e., one who shares common interests with them) versus an individual who is otherwise untrustworthy, but is made trustworthy by an external institution? This question is an important one. Indeed, if subjects' brain activity is different when they receive information from these two types of trustworthy individuals (even though their decisions and reaction times are similar), then this suggests that political scientists who seek to understand trust may not necessarily get the whole story if they only observe subjects' behavior. More broadly, such a finding would also have implications for research on persuasion, as it would indicate that the manner in which a source is made trustworthy (and not just trustworthiness itself) affects how citizens process information from that source. On the other hand, if subjects' decisions, reaction times, and brain activity are similar when they receive information from these two types of trustworthy individuals, then this indicates that institutions induce not only the same behavior as common interests, but also the same cognitive processing of information.

To address this open question, we replicate Lupia and McCubbins's experiments with electroencephalograph (EEG) and timed-response technology. Like Lupia and McCubbins, we vary the perceived trustworthiness of the reporter by manipulating the interests of the reporter, as well as the institutional context in which the reporter makes his or her statement. Specifically, in the Common Interests condition, the reporter's interests are aligned with those of subjects; that is, the reporter benefits when subjects make welfare-improving choices. In the Conflicting Interests condition, the reporter's interests conflict with those of subjects; that is, the reporter benefits when subjects do not make welfare-improving choices. In the Penalty for Lying condition, the reporter's interests conflict with those of subjects, but the reporter is penalized every time he or she makes a false statement to subjects. Importantly, in our experiments the penalty for lying is large enough to ensure that the reporter has a dominant strategy to tell the truth.[1]

Although our behavioral results are consistent with Lupia and McCubbins's (1998) conclusion that institutions substitute for common interests, our EEG results reveal something about trust that we do not learn from simply observing subjects' decisions and reaction times. That is, they demonstrate that even though the reporter is, theoretically and behaviorally, equally trustworthy in the Common Interests and Penalty for Lying conditions, subjects process information quite differently when it comes from a reporter who is trustworthy by virtue of sharing common interests with them versus a reporter who is made trustworthy by an external institution. Indeed, across a wide range of cognitive responses, subjects' brain activity is very different in the Common Interests condition, relative to both the Penalty for Lying and Conflicting Interests conditions. Interestingly, this processing difference exists even though subjects are equally likely to base their decisions upon the reporter's reports in the Common Interests and Penalty for Lying conditions and even though they take the same amount of time to make their decisions in these conditions. Given this difference between subjects' behavior and brain activity, it appears that recording subjects' brain activity can potentially add a new dimension to our understanding of subjects' decisions to trust the statements of others.

[1] Following Lupia and McCubbins (1998), we vary the presence of common versus conflicting interests, as well as the penalty for lying, by manipulating the financial incentives of subjects.

This paper proceeds as follows. We begin with a review of the literature on trust. We then describe the subjects, research design, and equipment that we use in our experiments. Next, we propose testable hypotheses. We then present our experimental results on subjects' decisions, reaction times, and brain activity. We conclude with a discussion of the substantive and methodological implications that our research has for debates about trust and other political phenomena. Specifically, we emphasize that our experiments show the value of tying together both behavioral results and brain data in analyses of political phenomena, such as trust. Although our study represents only a first step in this endeavor, we emphasize that future research on trust (and other topics of interest to political scientists) can potentially benefit from simultaneously assessing behavior and brain activity.

Existing Literature

Our research is motivated by the vast, interdisciplinary literature on trust. Indeed, ever since social scientists first documented significant declines in political and social trust (Easton 1965; Gamson 1968), scholars from a wide range of disciplines (most notably, political science, economics, psychology, neuroscience, sociology, and law) have assessed individual and systemic factors that influence trust (for a survey, see Levi and Stoker 2000). For example, many scholars suggest that characteristics of officeholders affect citizens' levels of trust (Citrin 1974; Abramson and Finifter 1981; Citrin and Green 1986; Miller and Borrelli 1991; Hetherington 1998, 1999), while others suggest that characteristics of citizens themselves (such as their levels of dissatisfaction with Congress or how extremist or centrist they are) determine the extent to which they trust their government (Miller 1974; Feldman 1983; Hibbing and Theiss-Morse 1995). Still other scholars suggest that systemic factors, such as political scandals, institutions, and the rise of negative political messages on television, explain levels of trust among citizens (Miller et al. 1979; Patterson 1993; Capella and Jamieson 1997; Lupia and McCubbins 1998; Boudreau 2006).

The body of literature described above goes a long way toward identifying factors that influence citizens' levels of trust, and it has also spawned fruitful research on the behavioral consequences of trust or distrust (see, e.g., Rosenstone et al. 1984; Levi 1988; Sigelman et al. 1992; Atkeson et al. 1996; Hetherington 1999; Scholz and Lubell 1998a, 1998b). That said, much of this literature does not provide a general, theoretical account of the conditions under which citizens, first, trust the statements of individuals personally unknown to them, and second, base their decisions upon these statements, as they must often do when making political, legal, and economic decisions. In an effort to identify these conditions for trust, Lupia and McCubbins (1998) and Crawford and Sobel (1982) develop game theoretic models that yield the following equilibrium predictions:

1) Common interests between a knowledgeable reporter and citizens induce citizens to trust the reporter's statements and base their choices upon them.

2) Conflicting interests between a knowledgeable reporter and citizens induce citizens to ignore the reporter's statements and make their decisions on their own.

3) Institutions can sometimes induce citizens to trust the reporter's statements, even when the reporter's interests conflict with those of citizens. For example, a sufficiently large penalty for lying can remove a reporter's incentive to lie, and therefore lead citizens to trust the reporter's statements.

Lupia and McCubbins test these predictions in a series of behavioral experiments, and their results support their predictions. It is this literature on trust, in general, and on the conditions under which citizens trust and base their decisions upon the statements of others that we build on in this study. Specifically, we replicate Lupia and McCubbins's (1998) experiments, but we record not only subjects' decisions to trust (or not trust) the reporter's statements, but also subjects' reaction times and EEGs as they make their decisions. In this way, we are able to assess not only whether subjects' behavior is similar when they receive information from a reporter who shares common interests with them and a reporter who is made trustworthy by an external institution, but also whether subjects' brain activity is similar when they receive information from these two different types of trustworthy reporters.

We pay particular attention to different facets of subjects' behavior (i.e., their decisions and reaction times) and brain activity across conditions because previous research does not assess whether subjects' reaction times or brain activity differs when receiving information from a reporter who shares common interests with them and a reporter who is made trustworthy by a penalty for lying. For example, it is possible that subjects take longer to make decisions when they receive information from a reporter who is made trustworthy by a penalty for lying, relative to when they receive information from a reporter who is trustworthy by virtue of sharing common interests with them. It is also possible that subjects' brain activity will be different when they receive information from these two types of trustworthy reporters, even though they make similar decisions with both reporters. Indeed, previous research in cognitive neuroscience shows that similar behavioral outcomes can be subserved by different neural mechanisms (Grady et al. 1992; Reuter-Lorenz et al. 2000). Thus, we address the similarity in subjects' decisions, reaction times, and brain activity when they receive information from a reporter with common interests, a reporter with conflicting interests, and a reporter who is made trustworthy by an external institution, namely a penalty for lying. In so doing, we contribute not only to the literature on trust, but also to the growing literature in political science that uses neuroimaging to analyze political cognition (see, e.g., Morris, Squires, Taber, and Lodge 2003; Lieberman, Schreiber, and Ochsner 2003; Wilson, Stevenson, and Potts 2006; Alford and Hibbing 2006; Dickson, Scheve, and Stanley 2008) and to the economics, psychology, and cognitive science literatures that assess the neural correlates of trust (see, e.g., McCabe et al. 2001; Zak et al. 2004; de Quervain et al. 2004; Delgado et al. 2005; Kosfeld et al. 2005; King-Casas et al. 2005).

Research Design

Subjects

A total of 59 healthy adults from the University of California, Davis and the University of California, San Diego communities (37 men), aged 18 to 28, participated in our experiments. All subjects were paid based on the decisions that they made in our experiment, and they earned, on average, between 27 and 60 dollars. We recorded behavioral responses and reaction times from all 59 subjects, and we recorded the EEGs of 12 of these subjects.[2]

[2] Although 12 subjects may seem like a small sample size and we would like to have had a larger sample, it is important to note that each subject makes 150 decisions in our experiment. Because of the large number of decisions that EEG experiments require (Luck 2005), it is standard in both cognitive neuroscience and in the growing number of EEG experiments in political science to publish results from studies that use between 7 and 16 subjects (see, e.g., Morris, Squires, Taber, and Lodge 2003; Wilson, Stevenson, and Potts 2006; Squires et al. 1976; Bledowski et al. 2004; Dien et al. 2004; Gonsalvez 2007).

Procedure

Following Lupia and McCubbins (1998), we ask subjects in our experiment to predict the outcomes of coin tosses that they do not observe. We tell subjects that they earn 50 cents for each correct prediction that they make, and nothing when they make an incorrect prediction or fail to make a prediction.[3] We also inform subjects that another subject in another room (dubbed "the reporter") observes each coin toss outcome and then sends a report to them via computer about whether the coin landed on heads or tails. Importantly, we tell subjects that the reporter can either lie about the coin toss outcome or tell the truth. Thus, before subjects make a prediction about each coin toss, they observe the reporter's report of whether the coin landed on heads or tails, but they do not know whether the report is truthful. As in Lupia and McCubbins (1998), the key factor that we manipulate is the perceived trustworthiness of the reporter, and we do this by varying the interests of the reporter, as well as the institutional context in which the reporter sends his or her report.

[3] We begin our experiments by asking subjects to predict whether several practice coin tosses land on heads or tails, and we pay subjects 50 cents for each correct prediction that they make. The purpose of these practice predictions is to ensure that subjects understand that they earn money based upon the choices they make in the experiment.

Although we tell subjects that there is another person acting as the reporter in another room, the reporter's reports of heads or tails in each condition are actually based upon Lupia and McCubbins's (1998) experimental results and are, thus, programmed into the computer before the experiment begins. That said, we take many precautions to ensure that subjects believe that there is another person acting as the reporter. For example, an experimenter leaves the experimental laboratory between conditions, ostensibly to make sure the reporter is ready to begin the next set of trials. Also, the amount of time that it takes for the reporter's reports to appear on subjects' computer screens is long enough for us to credibly state that another subject is in another room sending reports via computer. To further promote the illusion that another person is acting as the reporter, we randomly vary the amount of time that it takes for the reporter's reports to appear on subjects' computer screens in some experiments. At the end of our experiments, none of the subjects expressed skepticism regarding the existence of a real reporter.

Sequence of Events

We begin the experiment by reading the instructions for the Common Interests condition to subjects. That is, we ask subjects to predict the outcome of an unseen coin toss after receiving a message from the reporter. We inform subjects that, in this condition, both they and the reporter earn 50 cents every time they, the subjects, correctly predict the coin toss outcome, and nothing if they predict incorrectly or fail to respond before the onset of the next coin toss. We explicitly remind subjects that it is entirely the reporter's decision as to whether he or she sends a true or a false report via the computer. To ensure that subjects fully understand the instructions for the Common Interests condition, we give them a quiz that asks them to say how much money the reporter earns under various circumstances. To motivate performance on the quiz, we pay subjects 25 cents for each quiz question they answer correctly. When we are sure that subjects understand how the reporter earns money in the Common Interests condition, 10 experimental trials begin.

Following the initial Common Interests trials, we read the instructions for the Conflicting Interests condition to subjects. Specifically, we tell subjects that their task is the same as in the previous block of trials – to predict the outcome of an unseen coin toss after receiving a message from a reporter. We tell subjects that while they themselves still earn 50 cents for each correctly predicted coin toss and nothing for incorrect predictions, the reporter now earns 50 cents for each incorrect prediction that subjects make. We then give subjects a brief quiz on how much money the reporter earns under various circumstances, and we pay them 25 cents for each correctly answered quiz question. When we are sure that subjects understand how the reporter earns money in this condition, 10 Conflicting Interests trials begin.

Following the initial Conflicting Interests trials, we read the instructions for the Penalty for Lying condition to subjects. We tell subjects that as in the previous (Conflicting Interests) trials, the reporter earns 50 cents for each of the subject's incorrect predictions, while the subject earns 50 cents for each correct prediction. We also tell subjects that every time the reporter sends a false report, we deduct $1 from the reporter's experimental earnings. We then give subjects a brief quiz on how much money the reporter earns under various circumstances, and we pay them 25 cents for each correctly answered quiz question. Because we quiz subjects on how the reporter earns money and correct their quizzes in front of them, they know that the $1 penalty is large enough to ensure that the reporter always has an incentive to tell the truth about the coin toss outcome. When we are sure that subjects understand how the reporter earns money in this condition, 10 Penalty for Lying trials begin.

Once subjects complete 10 trials for all three conditions, we collect data for additional coin tosses in each of our three conditions. In experiments where we record only behavioral responses and reaction times, subjects complete an additional block of 10 coin tosses in each of our three conditions. Thus, subjects in these experiments predict the outcomes of a total of 20 coin tosses in each condition. In experiments where we record subjects' EEGs, subjects complete an additional block of 40 trials in each of our three conditions. Thus, subjects in these experiments predict the outcomes of a total of 50 coin tosses in each condition. We include more trials in our EEG experiments because a large number of trials is required to measure particular cognitive responses accurately (Luck 2005).[4] Further, in order to control for learning and arousal effects in both experiments, half of the subjects complete the second block of trials in order 1 (Common Interests, Conflicting Interests, Penalty for Lying), while the other half complete the second block of trials in order 2 (Penalty for Lying, Conflicting Interests, and Common Interests).

[4] As Luck (2005, p. 23) states, "ERP effects often require fifty, a hundred, or even a thousand trials per subject in each condition."

In all experiments, subjects sit in a comfortable chair in front of a computer screen. As shown in Figure 1, all trials begin with the text "(Tossing Coin)" appearing in the center of a 19-inch color monitor for 3 seconds. Next, the text "Showing outcome to reporter" is displayed on the monitor for 5 seconds. The text "The reporter says" then appears for 5 seconds. These first three prompts appear on the screen for longer amounts of time (a total of 13 seconds) than subsequent prompts in order to promote the illusion that another experimenter is actually flipping a coin in another room and that another subject (acting as the reporter) is actually sending a report.[5] The reporter's report is comprised of either a 1 second presentation of the word "HEADS" or the word "TAILS." We give subjects 6 seconds from the onset of the "HEADS/TAILS" prompt to make a prediction about the coin toss outcome.[6] Subjects make their predictions via a button press in which a left hand response indicates HEADS and a right hand response indicates TAILS. We repeat this sequence for each coin toss in our experiment, and on each coin toss, we record the amount of time that elapses between the presentation of the "HEADS/TAILS" prompt and subjects' predictions. We do not tell subjects that their predictions are being timed, and we do not give them any feedback until the very end of the experiment. We also tell subjects that the reporter does not observe their predictions about the coin toss outcomes.

[5] In experiments where we record subjects' EEGs, we use variable inter-stimulus intervals (ISI) to further promote the illusion that another experimenter is actually flipping a coin in another room and that another subject (acting as the reporter) is actually sending a report. We use variable ISI in our EEG experiments because of the larger number of trials that subjects complete; that is, because subjects predict the outcomes of 150 coin tosses, they will likely notice if the experimenter always takes the same amount of time to flip the coin or if the reporter always takes the same amount of time to send his or her report. Thus, we randomly vary the amount of time that it takes for the coin to be tossed and for the reporter to be shown the coin toss outcome. Specifically, the "(Tossing Coin)" and "Showing outcome to reporter" prompts are each followed by a variable ISI that ranges from 4-1000 ms.

[6] The stimulus presentation in experiments where we record subjects' EEGs contains other slight differences. Specifically, in our EEG experiments, the text "The Reporter Says…" appears for 1 second, followed by 500 ms of blank screen. The reporter's report in these experiments is comprised of either a 500 ms presentation of the word "HEADS" or the word "TAILS," followed by a 300 ms ISI. The reporter's report of heads or tails is followed by a prompt that reads, "Your Guess?" for 500 ms. In these experiments, we give subjects 4500 ms from the onset of the "Your Guess?" prompt to make their prediction. We repeat this sequence for each of the 150 coin tosses in our EEG experiments. On each coin toss, we record the amount of time that elapses between the presentation of the "Your Guess?" prompt and subjects' predictions.
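The incentive structure just described makes truth-telling dominant for the reporter in the Penalty for Lying condition, and the logic can be checked with a few lines of arithmetic. The sketch below is illustrative only (the names and function are ours, not part of the experimental software); it uses the payoffs stated above: 50 cents to the reporter for each incorrect subject prediction, and a $1 deduction for each false report.

```python
# Illustrative check (not the experimental software): with a $1 penalty,
# lying is dominated because the most a lie can ever gain the reporter is $0.50.
REPORTER_GAIN_WRONG = 0.50  # reporter earns this when the subject errs
LYING_PENALTY = 1.00        # deducted for every false report

def reporter_payoff(lies: bool, subject_errs: bool) -> float:
    """Reporter's earnings on one trial of the Penalty for Lying condition."""
    payoff = REPORTER_GAIN_WRONG if subject_errs else 0.0
    if lies:
        payoff -= LYING_PENALTY
    return payoff

# Enumerate every outcome: the best lying payoff (-$0.50) is still worse
# than the worst truth-telling payoff ($0.00), so truth-telling dominates.
for subject_errs in (False, True):
    truth = reporter_payoff(lies=False, subject_errs=subject_errs)
    lie = reporter_payoff(lies=True, subject_errs=subject_errs)
    print(f"subject errs={subject_errs}: truth ${truth:+.2f}, lie ${lie:+.2f}")
```

Because the penalty ($1) exceeds the maximum amount a lie could earn the reporter ($0.50), no belief the subject might hold can make lying profitable, which is exactly what the in-person quiz is designed to make transparent to subjects.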

Figure 1. The Screens that Subjects View for Each Coin Toss
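As a compact summary of the trial sequence shown in Figure 1, the sketch below lays out the prompt timings for the behavioral version of the task, together with the 4-1000 ms jittered ISI used in the EEG version (footnote 5). All names are ours, for illustration; this is a sketch of the timeline, not the actual presentation code.

```python
import random

# Timeline for one behavioral trial (prompt text, duration in seconds),
# per the description above and Figure 1.
BEHAVIORAL_TRIAL = [
    ("(Tossing Coin)", 3.0),
    ("Showing outcome to reporter", 5.0),
    ("The reporter says", 5.0),
    ("HEADS or TAILS", 1.0),  # the reporter's report
    # subjects then have 6.0 s from report onset to press a button
]

def jittered_isi() -> float:
    """Variable ISI of 4-1000 ms inserted after the first two prompts in the
    EEG version, so the 'coin flip' never takes a fixed amount of time."""
    return random.uniform(0.004, 1.0)
```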

Electroencephalogram Recording and Analysis

We record subjects' brain activity from 29 tin electrodes that are arranged in an expanded version of the 10-20 system atop subjects' scalps (Nuwer et al. 1998).[7] In a nutshell, these electrodes record electrical activity in the brain due to postsynaptic potentials (i.e., the voltages that arise when neurotransmitters bind to receptors) occurring in the dendrites and cell bodies of neurons. Because postsynaptic potentials typically last from tens to hundreds of milliseconds and occur instantaneously, they summate across large populations of neurons, and it is therefore possible to record them from the scalp. That said, because electrical activity in response to a particular event (i.e., stimulus) is quite small, the signal must be enhanced by averaging over a large number of trials. Specifically, for each subject in an EEG experiment, it is necessary to 1) repeat the event of interest (which in our experiment is the reporter's report) many times in each experimental condition and 2) time-lock segments of the EEG to that event so that those segments can be averaged together for each experimental condition. This averaging process reveals each subject's brain's electrical response to the event, which is known as an event-related potential (ERP). Once this averaging process is completed for each subject, all subjects' ERPs are averaged together to produce what is known as a grand average ERP. It is this grand average ERP that is used in statistical analyses (Luck 2005).

[7] The EEG was sampled at 250 Hz and referenced to the left mastoid. We monitored blinks and eye movements via an electrode beneath the right eye and one electrode at each of the outer canthi (the electrooculogram, EOG). We screened the data for blinks and eye movements (which distort the data), and trials that contained these artifacts were removed from our analysis, as is standard in cognitive neuroscience. The average artifact rejection rate was 31% (se = 17%). The EEG and EOG were recorded and amplified with a set of 32 bioamplifiers from SA Instruments (San Diego, CA), with half-amplitude cut-offs at 0.01 and 40 Hz, and digitized on a PC.

In our study, we time-lock subjects' EEGs to the onset of the reporter's report in each experimental condition (i.e., Common Interests, Conflicting Interests, and Penalty for Lying). We assess subjects' ERPs by measuring the mean amplitude of the waveform in intervals that capture various cognitive components of interest. As shown in Figure 2, we analyze the mean amplitudes that we observe in each experimental condition by using three sorts of repeated measures ANOVAs: 1) midline analyses involving measurements taken from channels FPz, FCz, Cz, CPz, Pz, and Oz, 2) medial analyses involving measurements taken from channels FP1, F3, FC3, C3, CP3, P3, and O1, and their right hemisphere counterparts, and 3) lateral analyses involving measurements from channels F7, FT7, TP7, and T5, and their right hemisphere counterparts (see Boudreau, McCubbins, and Coulson 2008 for additional details on the analysis).
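The time-locking, averaging, and mean-amplitude steps just described reduce to a few array operations. The sketch below is a minimal illustration, not the authors' analysis code: it assumes a continuous recording sampled at 250 Hz (as in footnote 7) and a list of sample indices marking report onsets in one condition, and all function and variable names are ours.

```python
import numpy as np

FS = 250  # sampling rate in Hz, per footnote 7

def subject_erp(eeg, onsets, tmin=-0.1, tmax=0.9):
    """Average time-locked epochs into one subject's ERP.

    eeg    : (n_channels, n_samples) continuous recording (artifact-free trials)
    onsets : sample indices of the reporter's report in one condition
    returns: (n_channels, n_times) ERP, baseline-corrected to the
             100 ms pre-stimulus mean
    """
    start, stop = int(tmin * FS), int(tmax * FS)
    epochs = np.stack([eeg[:, t + start : t + stop] for t in onsets])
    epochs = epochs - epochs[:, :, :-start].mean(axis=2, keepdims=True)
    return epochs.mean(axis=0)  # average over trials yields the ERP

def mean_amplitude(erp, t0, t1, tmin=-0.1):
    """Mean amplitude per channel in a post-stimulus window (seconds),
    e.g. mean_amplitude(erp, 0.4, 0.6) for the 400-600 ms P3 window."""
    i0, i1 = int((t0 - tmin) * FS), int((t1 - tmin) * FS)
    return erp[:, i0:i1].mean(axis=1)

# Grand average: the mean of the per-subject ERPs, e.g.
# grand_avg = np.mean([subject_erp(e, o) for e, o in subject_data], axis=0)
```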

Figure 2. Locations of the EEG Sensors Atop Subjects’ Scalps

Hypotheses

Lupia and McCubbins's (1998) theory and experiments suggest that particular institutions (such as a sufficiently large penalty for lying) can substitute for common interests. Based on their results, we predict that subjects will be equally likely to trust the reporter's statements (and, thus, base their predictions upon them) when the reporter shares common interests with them and when the reporter is made trustworthy by a penalty for lying. Following Lupia and McCubbins, we also predict that when the reporter's interests conflict with those of subjects, subjects will not trust the reporter's statements and, thus, not base their predictions upon them.

As for subjects' reaction times and brain activity in the three conditions, Lupia and McCubbins (1998) do not offer predictions for these other measures. That said, based on their theoretical and experimental results suggesting that institutions can substitute for common interests, we would expect to observe similar reaction times in the Common Interests and Penalty for Lying conditions, as well as similar brain activity in these two conditions. We would also expect subjects' reaction times and brain activity to be different in the Conflicting Interests condition (relative to the Common Interests and Penalty for Lying conditions), as this is the one condition where the reporter is not trustworthy. Thus, our null hypothesis is:

H0: Subjects' behavior (i.e., their decisions and reaction times) and brain activity are similar in the Common Interests and Penalty for Lying conditions and different from their behavior and brain activity in the Conflicting Interests condition.

If we are unable to reject the null hypothesis, then our study would suggest several substantive and methodological conclusions. First, it would suggest that particular institutions are perfect substitutes for common interests; that is, they induce not only the same behavior, but also the same reaction times and cognitive processing of information. Second, it may suggest that observing subjects' brain activity does not add much to our understanding of subjects' decisions to trust the statements of others. Indeed, if subjects' brain activity simply mirrors their behavior, then one might question whether there is any value added to using this technology. Stated differently, one might ask why political scientists should become trained in recording and interpreting subjects' brain activity if it simply tells us the same thing that subjects' decisions and reaction times tell us.

Alternatively, if subjects' brain activity does differ from their behavior, then this suggests that political scientists who seek to understand trust (and other political phenomena) may not necessarily get the whole story if they only observe subjects' decisions and reaction times. For example, in the context of our experiments, it is possible that subjects process information differently when it comes from a reporter who is trustworthy by virtue of sharing common interests with them versus a reporter who is made trustworthy by an external institution. This difference in the way that subjects process information from these two types of trustworthy reporters may exist even if they are equally likely to base their decisions upon these reporters' reports and even if they take the same amount of time to make their decisions with both reporters. If subjects' behavior and brain activity shows this pattern, then we can reject the null hypothesis in favor of an alternative hypothesis:

H1: Subjects' behavior (i.e., their decisions and reaction times) and brain activity are different in the Common Interests and Penalty for Lying conditions.

Behavioral Results: Subjects' Decisions

We assess the extent to which subjects trust the reporter's reports by examining the percentage of times that their predictions are the same as what the reporter reports in each experimental condition (i.e., what percentage of the time do subjects predict "heads" when the reporter reports "heads" and predict "tails" when the reporter reports "tails" in each experimental condition). We use one-sample t-tests to determine whether subjects' predictions match what the reporter reports more than 50% of the time. We use a 50% baseline because we toss a fair coin; thus, if subjects are simply choosing heads or tails randomly, then we would expect their predictions to match the reporter's reports 50% of the time. If subjects trust the reporter's reports, then we should observe their predictions matching the reporter's reports more than 50% of the time.

As shown in Figure 3, when subjects know that the reporter shares common interests with them, their predictions match what the reporter reports 93% of the time, a figure that is significantly greater than our 50% baseline (t = 24.01, p < 0.001). Similarly, in the Penalty for Lying condition, subjects' predictions match the reporter's reports 93% of the time, which is also significantly greater than 50% (t = 23.63, p < 0.001). However, in the Conflicting Interests condition, subjects' predictions match what the reporter reports only 54% of the time, which is not significantly different from 50% (t = 1.2, p = 0.23). These results are consistent with those of Lupia and McCubbins (1998) and suggest that subjects are equally likely to trust the statements of a reporter who shares common interests with them and a reporter who is made trustworthy by a penalty for lying.
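The test just described is a standard one-sample t-test against the 50% chance baseline. A minimal sketch is below; the match rates are made-up illustrative values, not our data.

```python
import numpy as np
from scipy import stats

# Each entry: one subject's proportion of predictions matching the
# reporter's report in a given condition (illustrative values only).
match_rates = np.array([0.95, 0.90, 1.00, 0.85, 0.95, 0.90])

# One-sample t-test against the 50% baseline implied by a fair coin.
t_stat, p_two_sided = stats.ttest_1samp(match_rates, popmean=0.5)
print(f"t = {t_stat:.2f}, p = {p_two_sided:.4f}")
```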

Figure 3. Percentage of Predictions that Match the Reporter's Report (bar chart of "% Matching Reporter" by condition: Common Interests, Penalty for Lying, and Conflicting Interests, with a line marking the 50% "Random" baseline)

Behavioral Results: Reaction Times

Our reaction time results are consistent with the results described above. That is, subjects in the Common Interests and Penalty for Lying conditions take similar amounts of time to make their predictions after receiving the reporter's report. Further, subjects in the Conflicting Interests condition are slower to make their predictions than are subjects in the other two conditions. Specifically, subjects in the Common Interests condition take, on average, 1191 milliseconds to make their predictions of "heads" or "tails," while subjects in the Penalty for Lying condition take, on average, 1157 milliseconds to make their predictions. This difference is not statistically significant (t = 0.41). Subjects in the Conflicting Interests condition, however, take, on average, 1318 milliseconds to make their predictions, which is significantly slower than subjects in the Penalty for Lying and Common Interests conditions (when compared to the Penalty for Lying condition, t = 1.84, p < 0.05; when compared to the Common Interests condition, t = 1.44, p < 0.1).[8]

[8] The reaction time results that we report do not include data from the 12 subjects whose EEGs we recorded. Although we recorded the reaction times of these subjects, we decided not to combine them with the reaction times of the other 47 subjects because of slight differences in the stimulus presentation in our EEG experiments. Specifically, in our EEG experiments, subjects had to wait for a "Your Guess?" prompt to appear on the computer screen before they could make their predictions. This, needless to say, artificially delayed their responses. Thus, while the pattern of results for these 12 subjects is consistent with the reaction time results we report in the text, we did not combine them with the results of the other 47 subjects, who could register their predictions immediately after seeing the reporter's report.

ERP Results

Unlike our behavioral results, our ERP results demonstrate that subjects' brain activity is quite different when they receive information from a reporter who shares common interests with them versus a reporter who is made trustworthy by a penalty for lying. Indeed, across a wide range of cognitive responses to the reporter's reports, we consistently find that subjects' brain activity is more similar in the Penalty for Lying and Conflicting Interests conditions than it is in the Penalty for Lying and Common Interests conditions. At a minimum, these results indicate that subjects process information differently when it comes from an individual who shares common interests with them, relative to when it comes from an individual whose interests conflict with their own, but who is made trustworthy by an external institution. More broadly, these results may suggest that subjects' brains treat reports as more informative when the reporter shares common interests with them (Donchin and Coles 1988).

The brain activity that we analyze is shown in Figure 4. (Note that, by convention, negative is plotted upward in all figures.) Specifically, grand average ERPs to the reporter's reports in the Common Interests condition are plotted with a straight line, grand average ERPs to reports in the Conflicting Interests condition are plotted with a dotted line, and grand average ERPs to reports in the Penalty for Lying condition are plotted with a dashed line. Prominent portions of the waveform include a negativity peaking approximately 100 ms after the onset of the reporter's report (the AN1), a positivity peaking approximately 200 ms after the onset of the reporter's report (the P2), a more broadly distributed positivity peaking at approximately 500 ms (the P3), a negative-going peak at 600 ms (the medial negativity), and subsequent slow wave activity we refer to as the late positive complex (LPC).

With only one exception, each of these responses indicates that subjects' brain activity is different in the Common Interests condition, relative to both the Penalty for Lying and Conflicting Interests conditions. That we observe differences in subjects' brain activity across these three conditions is remarkable because the visually presented stimuli (shown in Figure 1) are identical in each condition. It is also interesting that subjects' brain activity is different in the Common Interests and Penalty for Lying conditions even though both reporters are trustworthy (albeit for different reasons) in these two conditions.

Figure 4. Grand average ERPs recorded from the midline frontal (Fz) and parietal (Pz) electrode sites. ERPs are time-locked to the reporter's reports in each of the three experimental conditions. Negative voltage is plotted upwards.

Anterior N1 Component

We assess the anterior N1 (AN1) component by measuring the mean amplitude of ERPs elicited between 80 and 110 milliseconds after the onset of the reporter's report.[9] In this portion of the waveform, ERPs to reports in the Common Interests condition are less negative than ERPs to reports in the other two conditions (see Table 1 and Figure 4). Measured at medial electrode sites, the AN1 in the Common Interests condition is -0.3 microvolts, versus -1.2 microvolts in the Conflicting Interests condition and -0.7 microvolts in the Penalty for Lying condition.[10]

[9] The AN1 is a negativity peaking over fronto-central electrodes approximately 100 ms after the onset of a visually presented stimulus (Luck 1995).

[10] A significant interaction between the experimental condition, hemisphere, and the anterior-posterior factor results because the N1 response is largest at the anterior medial electrode sites and is slightly larger over the left hemisphere (hence, it is called the anterior N1, or AN1).

Table 1. Mean Amplitude Analysis of ERP Components (* indicates a significant difference, relative to the other two conditions)[11]

ERP Component       Condition               Microvolts
AN1                 Common Interests        -0.3 *
                    Conflicting Interests   -1.2
                    Penalty for Lying       -0.7
P2                  Common Interests         4.9 *
                    Conflicting Interests    3.63
                    Penalty for Lying        3.26
P3                  Common Interests         3.29 *
                    Conflicting Interests    1.62
                    Penalty for Lying        2.27
Medial Negativity   Common Interests         2.05
                    Conflicting Interests    0.21 *
                    Penalty for Lying        1.44
LPC                 Common Interests         3.18 *
                    Conflicting Interests    1.92
                    Penalty for Lying        1.73

[11] The numbers that we report in this table are the ones that were significant in our analysis.

Our result for the AN1 is remarkable because it shows that our manipulation of the trustworthiness of the reporter affected the amplitude of ERP waveforms within 100 ms of the appearance of the reporter's report. Although the visually presented reports are identical in each of our three conditions, the AN1 is larger in both the Conflicting Interests and the Penalty for Lying conditions than it is in the Common Interests condition. At a minimum, this difference in the size of the AN1 indicates that subjects' brains process the reporter's reports differently in the Common Interests condition, relative to the other two conditions. More broadly, this processing difference may result from greater anticipatory activity related to response preparation in the Conflicting Interests and Penalty for Lying conditions (see Vogel and Luck 2000). Thus, this result may suggest that subjects in the Common Interests condition more fully process the reporter's report before preparing their response.

P2 Component

We assess the P2 component by measuring the mean amplitude of ERPs between 180 and 250 milliseconds after the onset of the reporter's report. As shown in Table 1, repeated measures ANOVA reveals that ERPs at midline electrode sites are most positive in the Common Interests condition (4.9 microvolts), compared to 3.63 microvolts in the Conflicting Interests condition, and 3.26 microvolts in the Penalty for Lying condition. We observe the same pattern of results at the lateral electrode sites. The enhanced P2 in the Common Interests condition is thus consistent with our claim that subjects' brains process the reporter's reports differently in the Common Interests condition, relative to both the Penalty for Lying and Conflicting Interests conditions.

As for the interpretation of this result, it is important to note that the functional significance of the P2 component is not completely agreed upon. That said, the P2 has been argued to reflect some aspect of high-level perceptual processing (Kranczioch, Debner, and Engel 2003). Others have suggested that the P2 is primarily sensitive to the relevance of perceptual information and consequently argued that the P2 indexes the integration of motivational and perceptual information (Potts 2004; Potts, Patel, and Azam 2004; Potts, Martin, Burton, and Montague 2006). Thus, a broader interpretation of our results is that the enhanced P2 in the Common Interests condition may indicate that the reporter's report is more perceptually and motivationally salient in the Common Interests condition, relative to the other two conditions.

P3 Component and Late Positive Complex

We assess the P3 component by measuring the mean amplitude of ERPs from 400 to 600 milliseconds after the onset of the reporter's report. Our analysis suggests that ERPs to reports in the Common Interests condition are more positive than ERPs to reports in either the Conflicting Interests condition or the Penalty for Lying condition (see Table 1 and Figure 4). Our analysis of data recorded from midline sites reveals that the mean amplitude of ERPs in the Common Interests condition is 3.29 microvolts, which is significantly more positive than 2.27 microvolts in the Penalty for Lying condition [F(1, 11) = 5.32, p < 0.05] and more positive than 1.62 microvolts in the Conflicting Interests condition [F(1, 11) = 15.43, p < 0.01]. The amplitude difference between the Penalty for Lying and the Conflicting Interests conditions is not significant [F(1, 11) = 3.35, p = 0.09].

We find a similar pattern of results when we assess the LPC (a component that is related to the P3). Specifically, we analyze the LPC by measuring the mean amplitude of ERPs from 600 to 900 milliseconds after the onset of the reporter's report. Again, the mean amplitude of ERPs in this interval is significantly more positive in the Common Interests condition, relative to the Conflicting Interests and the Penalty for Lying conditions, at both the midline and lateral sites (see Table 1 and Figure 4). Our comparisons of mean amplitude measurements at midline sites reveal that ERPs in the Common Interests condition measure 3.18 microvolts, which is significantly more positive than 1.92 microvolts in the Conflicting Interests condition [F(1, 11) = 11.28, p < 0.01] and 1.73 microvolts in the Penalty for Lying condition [F(1, 11) = 6.11, p < 0.05]. The amplitude of the LPC to reports in the Conflicting Interests and the Penalty for Lying conditions does not statistically differ [midline: F(1, 11) = 0.21, p = 0.65].

Thus, subjects exhibit significantly larger P3 and LPC responses when they are exposed to a reporter who shares common interests with them, relative to when they are exposed to a reporter who is subject to a penalty for lying or who has conflicting interests with them. Interestingly (and unexpectedly), the size of the P3 and LPC responses is more similar in the Penalty for Lying and Conflicting Interests conditions than in the Penalty for Lying and Common Interests conditions. Taken together, these results again indicate that subjects' brains differentially process information in the Common Interests condition, relative to the other two conditions. More broadly, these results may indicate that subjects' brains treat reports as more informative in the Common Interests condition (Donchin and Coles 1988).
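The F statistics reported in this section come from repeated measures ANOVAs on the per-subject mean amplitudes. Below is a minimal one-factor (condition) sketch using the AnovaRM class from statsmodels; the data frame, its values, and its column names are ours for illustration, and the actual analysis also crossed condition with the electrode factors described earlier (midline, medial, and lateral).

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per subject x condition, with the mean amplitude
# (in microvolts) of a component window (e.g., the 400-600 ms P3 window).
df = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "condition": ["common", "conflict", "penalty"] * 3,
    "amplitude": [3.4, 1.5, 2.2, 3.1, 1.8, 2.4, 3.5, 1.6, 2.1],
})

# One-way repeated measures ANOVA: does mean amplitude differ by condition?
result = AnovaRM(df, depvar="amplitude", subject="subject",
                 within=["condition"]).fit()
print(result)
```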

Medial Negativity

We examine the medial negativity by measuring the mean amplitude of ERPs from 550 to 650 milliseconds after the onset of the reporter's report. Our analysis suggests that the mean amplitude of the ERPs is more negative in the Conflicting Interests condition than in either the Common Interests or the Penalty for Lying conditions (see Table 1 and Figure 4). Specifically, the mean amplitude of ERPs at the midline sites in the Conflicting Interests condition is 0.21 microvolts, which is significantly more negative than 2.05 microvolts in the Common Interests condition [F(1, 11) = 26.08, p < 0.01] and 1.44 microvolts in the Penalty for Lying condition [F(1, 11) = 7.92, p < 0.05]. Thus, for this component, reports in the Conflicting Interests condition elicit more negative ERPs than do reports in the other two conditions. This is the only ERP component that shows a similar response in the Common Interests and the Penalty for Lying conditions.

Conclusion

In our experiments, we analyzed subjects' behavior and brain activity in response to information from reporters whose trustworthiness stemmed from either the reporter's interests (vis-à-vis the subjects) or from an institution, such as a penalty for lying. We did so by recording the decisions, reaction times, and EEGs of subjects who guessed the outcome of an unseen coin toss after they received information from an anonymous reporter who knew the outcome of the coin toss, but was under no obligation to communicate it truthfully. Based upon Lupia and McCubbins's (1998) theory and experiments, we predicted that subjects would be equally likely to base their decisions upon the statements of a reporter who was trustworthy by virtue of sharing common interests with them and a reporter whose interests conflicted with their own, but who was made trustworthy by a penalty for lying. Because Lupia and McCubbins's (1998) theory does not make predictions about subjects' reaction times and brain activity when receiving information from reporters in the Common Interests, Conflicting Interests, and Penalty for Lying conditions, we asked whether these two other measures would also yield results that are consistent with their conclusion that institutions can substitute for common interests.

Our results indicate that although subjects behave as if reporters in the Common Interests and Penalty for Lying conditions are equally trustworthy, their brain activity suggests that they process information differently in the Common Interests and Penalty for Lying conditions. As shown in Figure 3, subjects in both the Common Interests and Penalty for Lying conditions almost always base their predictions on the reporter's report, while subjects apparently ignore the reporter's reports in the Conflicting Interests condition. Further, subjects' reaction times are similar in the Common Interests and Penalty for Lying conditions and are significantly faster than subjects' reaction times in the Conflicting Interests condition. Based on these behavioral responses, it appears that subjects are equally likely to trust a reporter who shares common interests with them and a reporter who is made trustworthy by an institution, namely a penalty for lying. In contrast, subjects' brain activity in response to the reporter's reports in the Common Interests condition tends to differ significantly from both the Conflicting Interests condition (as expected) and from the Penalty for Lying condition (contrary to our null hypothesis). Thus, even though the reporter is, theoretically and behaviorally, equally trustworthy in the Common Interests and Penalty for Lying conditions, subjects process information quite differently when it comes from a reporter who is trustworthy by virtue of sharing common interests with them versus a reporter who is made trustworthy by an external institution. In this way, our results suggest that even though institutions substitute for common interests in a behavioral sense, they do not necessarily induce the same cognitive processing of information.

As for the implications of our results, they indicate, at a minimum, that political scientists who seek to understand trust and other political phenomena may not necessarily get the whole story if they only observe subjects' decisions and reaction times. Specifically, in our experiments, subjects process information differently when it comes from reporters who are trustworthy for different reasons, and this processing difference exists even though subjects are equally likely to base their decisions upon these reporters' reports and even though they take the same amount of time to make their decisions with both reporters. Given this difference between subjects' behavior and brain activity, it is clear that recording subjects' brain activity adds a new dimension to our understanding of subjects' decisions to trust the statements of others. It also potentially adds a new dimension to our understanding of other political phenomena (such as affect, online processing, "hot cognition," and cognition in strategic settings), as other political scientists' studies make clear (see, e.g., Morris, Squires, Taber, and Lodge 2003; Wilson, Stevenson, and Potts 2006).

More broadly, our results may have implications for research on persuasion. Specifically, our results suggest that the manner in which a source is made trustworthy (and not just trustworthiness itself) affects how citizens process information from that source. Thus, politicians, endorsers, attorneys, and other actors who seek to persuade citizens should not necessarily assume that all perceptions of trustworthiness are created equal. Specifically, if the broader interpretations of our EEG results are correct (i.e., that subjects' brains treat reports as more informative in the Common Interests condition, relative to the Penalty for Lying condition), then political actors who seek to persuade citizens may benefit from conveying that they share common interests with citizens, as opposed to emphasizing their trustworthiness by appealing to institutional constraints. Of course, the question of whether and when the cognitive differences that we observe lead to changes in citizens' behavior is an empirical question that should be explored in future research.

Finally, we emphasize an important methodological conclusion: namely, that EEG technology has much to offer political scientists who seek to understand political and social cognition. First, because electricity travels at nearly the speed of light, the voltages that scalp electrodes record reflect the brain's activity at the same point in time; thus, EEG has excellent temporal resolution (approximately 1 millisecond) and provides a continuous measure of the online cognitive processing of information (Luck 2005). Given the many behavioral studies of the online processing model in political science, it is clear that the direct measure of actual online processing that EEG provides would be beneficial to many political scientists. Indeed, in their study of the "hot cognition" hypothesis that underlies the online processing model, Morris, Squires, Taber, and Lodge (2003) take advantage of EEG technology to test this hypothesis, arguing that EEG allows for a better understanding of sensory and cognitive processing, as well as the activation of implicit attitudes. We could not agree more. Second, EEG directly reflects the activity of neurons that are involved in the processing of information; therefore, EEG provides a direct measure of brain activity, in contrast to other neuroimaging techniques, such as fMRI, that provide more indirect measures that are based on blood oxygenation levels or blood flow (Luck 2005). Further, unlike other neuroimaging techniques, EEG is much less expensive (the supplies needed to test each subject cost between 1 and 3 dollars) and much less invasive (i.e., subjects simply wear a cap atop their heads that contains small electrodes). Thus, EEG provides political scientists with a unique, practical way of simultaneously observing decision making and the cognitive processing of information. Further, given the differences that we observe between subjects' behavior and brain activity in our study, it appears that recording subjects' brain activity via EEG can potentially add a new dimension to our understanding of trust and other political phenomena, a dimension that we cannot necessarily tap if we only record behavioral responses.

References

Abramson, P. R. and Finifter, A. W. (1981). On the meaning of political trust: New evidence from items introduced in 1978. American Journal of Political Science, 25, 297-307.
Albertson, B. and Brehm, J. (2003). Comments. Political Psychology, 24, 765-768.
Alford, J. R., Funk, C. L., and Hibbing, J. R. (2005). Are political orientations genetically transmitted? American Political Science Review, 99, 153-167.
Alford, J. R. and Hibbing, J. R. (2006). The neural basis of representative democracy. Working paper, University of Nebraska, Lincoln.
Atkeson, L. R., McCann, J. A., Rapoport, R. B., and Stone, W. J. (1996). Citizens for Perot: Assessing patterns of alienation and activism. In S. C. Craig (Ed.), Broken Contract: Changing Relationships Between Americans and Their Government. Boulder: Westview Press.
Bledowski, C., Prvulovic, D., Hoechstetter, K., Scherg, M., Wibral, M., Goebel, R., and Linden, D. E. J. (2004). Localizing P300 generators in visual target and distractor processing: A combined event-related potential and functional magnetic resonance imaging study. Journal of Neuroscience, 24, 9353-9360.
Boudreau, C. (2006). Jurors are competent cue-takers: How institutions substitute for legal sophistication. International Journal of Law in Context, 2(3), 293-304.
Boudreau, C., McCubbins, M. D., and Coulson, S. (2008). Knowing when to trust others: An ERP study of decision making after receiving information from unknown people. Working paper, University of California, Davis.
Cacioppo, J. T., Berntson, G. G., Lorig, T. S., Norris, C. J., Rickett, E., and Nusbaum, H. (2003). Just because you’re imaging the brain doesn’t mean you can stop using your head: A primer and set of first principles. Journal of Personality and Social Psychology, 85, 650-661.
Cacioppo, J. T. and Visser, P. S. (2003). Political psychology and social neuroscience: Strange bedfellows or comrades in arms? Political Psychology, 24, 647-656.
Cappella, J. N. and Jamieson, K. H. (1997). Spiral of Cynicism: The Press and the Public Good. New York: Oxford University Press.
Cesarini, D., Dawes, C. T., Fowler, J. H., Johannesson, M., Lichtenstein, P., and Wallace, B. (2008). Heritability of cooperative behavior in the trust game. Proceedings of the National Academy of Sciences, 105, 3721-3726.
Citrin, J. (1974). Comment: The political relevance of trust in government. American Political Science Review, 68, 973-988.
Citrin, J. and Green, D. P. (1986). Presidential leadership and the resurgence of trust in government. British Journal of Political Science, 16, 431-453.
Crawford, V. and Sobel, J. (1982). Strategic information transmission. Econometrica, 50, 1431-1451.
De Quervain, D. J. F., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., and Fehr, E. (2004). The neural basis of altruistic punishment. Science, 305, 1254-1258.
Delgado, M. R., Frank, R. H., and Phelps, E. A. (2005). Perceptions of moral character modulate the neural systems of reward during the trust game. Nature Neuroscience, 8, 1611-1618.
Dickson, E. S., Scheve, K., and Stanley, D. (2008). Testing the effect of social identity appeals in election campaigns: An fMRI study. Working paper, New York University.
Dien, J., Spencer, K. M., and Donchin, E. (2004). Parsing the late positive complex: Mental chronometry and the ERP components that inhabit the neighborhood of the P300. Psychophysiology, 41, 665-678.
Donchin, E. and Coles, M. G. H. (1988). Is the P300 component a manifestation of context updating? Behavioral and Brain Sciences, 11, 357-374.
Druckman, J. N. (2001a). On the limits of framing effects: Who can frame? Journal of Politics, 63, 1041-1066.
Druckman, J. N. (2001b). Using credible advice to overcome framing effects. Journal of Law, Economics, and Organization, 17, 62-82.
Druckman, J. N. (2001c). The implications of framing effects for citizen competence. Political Behavior, 23, 225-256.
Easton, D. (1965). A Systems Analysis of Political Life. New York: Wiley.
Feldman, S. (1983). The measurement and meaning of political trust. Political Methodology, 9, 341-354.
Fowler, J. H. and Dawes, C. T. (2008). Two genes predict voter turnout. Journal of Politics, 70.
Gamson, W. A. (1968). Power and Discontent. Homewood: Dorsey.
Gonsalvez, C. J., Barry, R. J., Rushby, J. A., and Polich, J. (2007). Target-to-target interval, intensity, and P300 from an auditory single-stimulus task. Psychophysiology, 44, 245-250.
Grady, C. L., Haxby, J. V., Horwitz, B., Schapiro, M. B., Rapoport, S. I., Ungerleider, L. G., Mishkin, M., Carson, R. E., and Herscovitch, P. (1992). Dissociation of object and spatial vision in human extrastriate cortex: Age-related changes in activation of regional cerebral blood flow measured with [15O] water and positron emission tomography. Journal of Cognitive Neuroscience, 4, 23-34.
Hetherington, M. J. (1998). The political relevance of political trust. American Political Science Review, 92, 791-808.
Hetherington, M. J. (1999). The effect of political trust on the presidential vote, 1968-96. American Political Science Review, 93, 311-326.
Hibbing, J. R. and Theiss-Morse, E. (1995). Congress as Public Enemy: Public Attitudes Toward American Political Institutions. New York: Cambridge University Press.
Johnson, D. D. P., McDermott, R., Barrett, E. S., Cowden, J., Wrangham, R., McIntyre, M. H., and Rosen, S. P. (2006). Overconfidence in wargames: Experimental evidence on expectations, aggression, gender, and testosterone. Proceedings of the Royal Society, 273, 2513-2520.
King, G., Keohane, R. O., and Verba, S. (1994). Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.
King-Casas, B., Tomlin, D., Anen, C., Camerer, C. F., Quartz, S. R., and Montague, P. R. (2005). Getting to know you: Reputation and trust in a two-person economic exchange. Science, 308, 78-83.
Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., and Fehr, E. (2005). Oxytocin increases trust in humans. Nature, 435, 673-676.
Kranczioch, C., Debener, S., and Engel, A. (2003). Event-related potential correlates of the attentional blink phenomenon. Cognitive Brain Research, 17, 177-187.
Laitin, D. D. (2002). Comparative politics: The state of the subdiscipline. In I. Katznelson and H. V. Milner (Eds.), Political Science: The State of the Discipline (pp. 630-659). New York: Norton.
Laitin, D. D. (2003). The perestroikan challenge to social science. Politics and Society, 31, 163-184.
Levi, M. (1988). Of Rule and Revenue. Berkeley: University of California Press.
Levi, M. and Stoker, L. (2000). Political trust and trustworthiness. Annual Review of Political Science, 3, 475-507.
Lieberman, M. D., Schreiber, D., and Ochsner, K. N. (2003). Is political cognition like riding a bicycle? How cognitive neuroscience can inform research on political thinking. Political Psychology, 24, 681-704.
Luck, S. J. (1995). Multiple mechanisms of visual-spatial attention: Recent evidence from human electrophysiology. Behavioural Brain Research, 71, 113-123.
Luck, S. J. (2005). An Introduction to the Event-Related Potential Technique. Cambridge: MIT Press.
Lupia, A. (1992). Busy voters, agenda control, and the power of information. American Political Science Review, 86, 390-404.
Lupia, A. (1994). Shortcuts versus encyclopedias: Information and voting behavior in California insurance reform elections. American Political Science Review, 88, 63-76.
Lupia, A. and McCubbins, M. D. (1998). The Democratic Dilemma: Can Citizens Learn What They Need to Know? Cambridge: Cambridge University Press.
McCabe, K., Houser, D., Ryan, L., Smith, V., and Trouard, T. (2001). A functional imaging study of cooperation in two-person reciprocal exchange. Proceedings of the National Academy of Sciences, 98, 11832-11835.
McDermott, R. (2002). Experimental methods in political science. Annual Review of Political Science, 5, 31-61.
Miller, A. H. (1974). Political issues and trust in government: 1964-1970. American Political Science Review, 68, 951-972.
Miller, A. H. and Borrelli, S. (1991). Confidence in government during the 1980s. American Politics Quarterly, 19, 147-173.
Miller, A. H., Goldenberg, E. N., and Erbring, L. (1979). Type-set politics: Impact of newspapers on public confidence. American Political Science Review, 73, 67-84.
Mondak, J. J. (1993). Source cues and policy approval: The cognitive dynamics of public support for the Reagan agenda. American Journal of Political Science, 37, 186-212.
Morris, J. P., Squires, N. K., Taber, C. S., and Lodge, M. (2003). Activation of political attitudes: A psychophysiological examination of the hot cognition hypothesis. Political Psychology, 24, 727-745.
Mutz, D. C. (2007). Effects of “in-your-face” television discourse on perceptions of a legitimate opposition. American Political Science Review, 101, 621-635.
Nuwer, M., Comi, G., Emerson, R., Fuglsang-Frederiksen, A., Guerit, J.-M., Hinrichs, H., and Rappelsberger, P. (1998). IFCN standards for digital recording of clinical EEG. Electroencephalography and Clinical Neurophysiology, 106, 259-261.
Patterson, T. E. (1993). Out of Order. New York: Knopf.
Potts, G. F. (2004). An ERP index of task relevance evaluation of visual stimuli. Brain and Cognition, 56, 5-13.
Potts, G. F., Patel, S. H., and Azzam, P. N. (2004). Impact of instructed relevance on the visual ERP. International Journal of Psychophysiology, 52, 197-209.
Potts, G. F., Martin, L. E., Burton, P., and Montague, P. R. (2006). When things are better or worse than expected: The medial frontal cortex and the allocation of processing resources. Journal of Cognitive Neuroscience, 18, 1112-1119.
Reuter-Lorenz, P. A., Jonides, J., Smith, E. E., Hartley, A., Miller, A., Marshuetz, C., and Koeppe, R. A. (2000). Age differences in the frontal lateralization of verbal and spatial working memory revealed by PET. Journal of Cognitive Neuroscience, 12, 174-187.
Rosenstone, S. J., Behr, R. L., and Lazarus, E. H. (1984). Third Parties in America: Citizen Response to Major Party Failure. Princeton: Princeton University Press.
Scholz, J. T. and Lubell, M. (1998a). Adaptive political attitudes: Duty, trust, and fear as monitors of tax policy. American Journal of Political Science, 42, 903-920.
Scholz, J. T. and Lubell, M. (1998b). Trust and taxpaying: Testing the heuristic approach to collective action. American Journal of Political Science, 42, 398-417.
Sigelman, L., Sigelman, C. K., and Walkosz, B. J. (1992). The public and the paradox of leadership: An experimental analysis. American Journal of Political Science, 36, 366-385.
Sniderman, P. M., Brody, R. A., and Tetlock, P. E. (1991). Reasoning and Choice: Explorations in Political Psychology. New York: Cambridge University Press.
Squires, K. C., Wickens, C., Squires, N. K., and Donchin, E. (1976). The effects of stimulus sequence on the waveform of the cortical event-related potential. Science, 193, 1142-1146.
Theiss-Morse, E., Fried, A., Sullivan, J. L., and Dietz, M. (1991). Mixing methods: A multistage strategy for studying patriotism and citizen participation. Political Analysis, 3, 89-121.
Vogel, E. K. and Luck, S. J. (2000). The visual N1 component as an index of a discrimination process. Psychophysiology, 37, 190-203.
Wahlke, J. C. and Lodge, M. G. (1972). Psychophysiological measures of political attitudes and behavior. Midwest Journal of Political Science, 16, 505-537.
Wilson, R. K., Stevenson, R., and Potts, G. (2006). Brain activity in the play of dominant strategy and mixed strategy games. Political Psychology, 27, 459-478.
Zak, P. J., Kurzban, R., and Matzner, W. T. (2004). The neurobiology of trust. Annals of the New York Academy of Sciences, 1032, 224-227.
Zak, P. J., Kurzban, R., and Matzner, W. T. (2005). Oxytocin is associated with human trustworthiness. Hormones and Behavior, 48, 522-527.