J Nonverbal Behav (2007) 31:259–275
DOI 10.1007/s10919-007-0036-4

ORIGINAL PAPER

Detecting Happiness: Perceiver Sensitivity to Enjoyment and Non-Enjoyment Smiles

Lynden Miles · Lucy Johnston

Published online: 25 September 2007
© Springer Science+Business Media, LLC 2007

Abstract The physiognomic distinctions between spontaneous enjoyment smiles and deliberate non-enjoyment smiles provide the social perceiver with a functional, accessible source of information to help regulate social interaction. Two experiments were performed to investigate whether perceivers were sensitive to this information in a contextually meaningful manner. In Experiment 1, participants were asked to judge whether a target individual was happy or not. The results revealed that participants were indeed sensitive to the differences between enjoyment and non-enjoyment smiles. In Experiment 2, participants performed a priming task without any specific instruction to judge emotional state. Neutral expressions, non-enjoyment smiles and enjoyment smiles were employed as primes in a word valence identification task. The results demonstrated a clear trend indicative of perceiver sensitivity. When compared to the baseline condition of a neutral expression prime, enjoyment but not non-enjoyment smiles facilitated identification of positive words.

Keywords Facial expression · Happiness · Deliberate non-enjoyment smile · Spontaneous enjoyment smile · Social perception

L. Miles (✉)
School of Psychology, University of Aberdeen, Aberdeen AB24 2UB, Scotland
e-mail: [email protected]

L. Johnston
Department of Psychology, University of Canterbury, Private Bag 4800, Christchurch, New Zealand

Introduction

The smile is a ubiquitous human facial expression universally recognized as an indication of a positive emotional experience (Ekman 1972; Ekman and Friesen 1971; Ekman et al. 1969, 1987; Elfenbein and Ambady 2002; Izard 1994). Characterized by upturned corners of the mouth resulting from contraction of the zygomatic major muscles, the smile is one of the most common facial displays (Abel 2002). In social settings, however, smiles often serve diverse communicatory functions rather than simply expressing happiness. Smiles are frequently used as intentional communicative mechanisms in order, for example, to coordinate conversation (Ekman 2001), to conceal the expression of other emotions (Ekman and Friesen 1982; Ekman et al. 1988), to reduce conflict or tension (Ikuta 1999), to manipulate or deceive (Keating and Heltman 1994), or to appease others (Hecht and LaFrance 1998). Ekman (2001) identified 18 smile types, each with a specific social meaning beyond the spontaneous expression of happiness, and went on to suggest that there may be around 50 types of smile in total. A potential challenge for the social perceiver, therefore, is to accurately discriminate between the various forms of smiling in order to know the contextual meaning of a given smile. A failure to do so may risk misperceiving what a given social interaction affords. In this sense, the broad distinction between smiles that occur in the context of a positive emotional experience (enjoyment smiles) and those that do not (non-enjoyment smiles) is potentially salient to the perceiver.

Many researchers have asserted that accurate knowledge of the emotional states of others has an important functional role in the coordination and regulation of effective social interaction (e.g., Adelmann and Zajonc 1989; Ekman 2003; Fredrickson 1998; Hess et al. 1995; Keltner and Haidt 1999; Shiota et al. 2004). Mistaking a polite greeting smile expressed by a stranger for a true indicator of happiness may disrupt the ensuing interaction. While the former, a non-enjoyment smile, is certainly an important, culturally specific component of the social norms and display rules that help regulate day-to-day interaction (Ekman 2003), it has been argued that the latter, an enjoyment smile, specifies a distinct set of social affordances relevant to the emotional state of the smiling individual (e.g., an opportunity for effective cooperation; Owren and Bachorowski 2001).
The consequences of misperception may be even more serious if a smile intended to mask another emotional state, for instance anger, is mistaken for an expression of happiness. It follows, therefore, that in order to maximize the functionality conferred by accurate social perception, the perceiver needs to be sensitive to the difference between smiles that are a component of a positive emotional experience, and those that serve other communicative functions.1 We report two experiments that investigated the ability of perceivers to distinguish enjoyment from non-enjoyment smiles.

Differences in Smile Physiognomy

For perceivers to be sensitive to smile type there must be information available that reliably specifies the meaning of a smile. A growing body of research suggests there is both static and dynamic facial information with utility for the judgment of smile type. With respect to static information (i.e., information available in a photograph), Duchenne (1862/1990) and, more recently, Ekman et al. (1988) have reported physiognomic distinctions between enjoyment and non-enjoyment smiles. In addition to the action of zygomatic major, which pulls the lip corners up and generically marks a smile, enjoyment smiles also involve recruitment of the eye-sphincter muscle orbicularis oculi, specifically the pars lateralis aspect (also termed the Duchenne marker). When contracted, this muscle causes wrinkles (or crow's feet) at the outer corners of the eyes, raising of the cheeks, bagging or bulging of skin below the eye, lowering of the eyebrows, and a narrowing of the eye aperture as a result of the eyelids being pulled together (Frank 2002). Smiles that feature contraction of orbicularis oculi have been associated with increases in self-reported levels of enjoyment (Ekman et al. 1990), higher ratings of positive mood by others (Scherer and Ceschi 2000), and patterns of neural activity consistent with those shown when in a positive emotional state (Davidson et al. 1990; Ekman et al. 1990; Fox and Davidson 1988). Also relevant to static displays, Ekman et al. (1981) reported that both adults' and children's smiles in response to a joke (enjoyment smiles) were less asymmetrical than their smiles when instructed to smile by the experimenter (non-enjoyment smiles). This finding was later supported by meta-analysis (Skinner and Mullen 1991).

Beyond static factors, differences in the dynamical properties of spontaneous and deliberate facial actions may provide the attuned perceiver with information relevant to the meaning of a smile. Hess and Kleck (1990) reported that, consistent with the distinction between voluntary and automatic movement (see Rinn 1984, for a review), deliberate smiles featured more irregular actions, that is, more phases, pauses, and stepwise intensity changes, than spontaneous smiles. Schmidt et al. (2003) demonstrated that spontaneous smiles exhibit very stable dynamic qualities, again typical of automatic movement. Similarly, Schmidt et al. (2006) demonstrated that the temporal aspects of zygomatic major action vary between deliberate and spontaneous smiles, with the former exhibiting greater onset and offset speed, amplitude, and offset duration.

Taken as a whole, there appears to be a catalogue of factors that differ with smile type. These factors span both the conceptual distinction between non-enjoyment and enjoyment smiles (e.g., orbicularis oculi contraction) and the ontological distinction between deliberate and spontaneous movement (e.g., dynamical properties of facial movement), and provide a potential informational basis for a perceiver to know the meaning of a smile.

¹ Of course, it is also functional for the social perceiver to be able to determine the contextual meaning of any smile; however, the focus of the present research was on the global distinction between enjoyment and non-enjoyment smiles.
Although it must be acknowledged that there is some inconsistency within the available empirical reports (e.g., recently Schmidt et al. (2006) described instances of the contraction of orbicularis oculi during episodes of deliberate smiling), the evidence as it stands indicates a strong but imperfect relationship between these factors and smile type across a number of levels of measurement (e.g., neurological, self-report, perceptual) suggesting there is at least heuristic value for a perceiver who attends to this information (Soussignan 2002). It is worthwhile, therefore, to investigate the use that perceivers make of the factors that relate to smile type (e.g., contraction of orbicularis oculi and the characteristics of spontaneous facial actions) in order to further elucidate the role that smile physiognomy plays in the coordination of effective social interaction.

Sensitivity to Differences in Smile Physiognomy

Beginning with Darwin (1872/1998), several researchers have presented evidence in support of perceiver sensitivity to the differences between enjoyment and non-enjoyment smiles (see Frank 2002, for an overview). In a seminal paper, Frank et al. (1993) reported that participants making explicit judgments of smile type (specifically "enjoyment" or "social" smiles) from video-clips were significantly more accurate than chance. Furthermore, these researchers also demonstrated that individuals expressing smiles that featured contraction of orbicularis oculi were rated more positively across a number of personality dimensions than when they were expressing smiles without this feature. Similar findings were reported by Scherer and Ceschi (2000) in a more naturalistic context, and by Peace et al. (2006) when participants were asked to evaluate clothing worn by a model displaying enjoyment and non-enjoyment smiles. Krumhuber and Kappas (2005), as well as Chartrand and Gosselin (2005; see also Gosselin et al. 2002), have reported evidence that when evaluating smile "genuineness" or "authenticity" perceivers make use of the characteristics that differentiate deliberate and spontaneous expressions (e.g., temporal qualities, asymmetry, orbicularis oculi contraction). Hess et al. (1989) similarly reported that perceivers used temporal factors to distinguish enjoyment from non-enjoyment smiles when asked to rate the happiness of a smiling individual.

The research reviewed above indicates that the characteristics differentially associated with enjoyment and non-enjoyment smiles (i.e., orbicularis oculi contraction and the kinematic patterns of deliberate and spontaneous facial actions) are also used by perceivers when attempting to judge the contextual meaning of a smile. The present research replicates and extends this work by employing ecologically valid facial displays and introducing innovative measures of the ability of perceivers to discriminate spontaneous enjoyment smiles from deliberate non-enjoyment smiles.

Experiment 1

Our first experiment partially replicates Frank et al. (1993), with adaptations to the experimental design. Participants viewed either photographs or video-clips of target individuals exhibiting neutral expressions, deliberately posed non-enjoyment smiles, and spontaneous enjoyment smiles, and judged whether each target individual was happy or not happy. The requirement to judge emotional state directly, as opposed to identifying smile type or smile veracity (e.g., deliberate vs. spontaneous, posed vs. genuine, fake vs. authentic), was adopted because, consistent with the conceptual distinction between enjoyment and non-enjoyment smiles, emotional state was the phenomenon of interest. Requiring participants to judge emotional state directly also eliminates the need for a "none of the above" option (see Frank and Stennett 2001) and helps distance, conceptually, judgments of smile type from potentially morally-relevant judgments of deception. The inclusion of both static and dynamic facial displays provides a means to compare sensitivity to the meaningful differences between enjoyment and non-enjoyment smiles across presentation modalities. Given the additional kinematic information available in dynamic compared with static facial displays, and consistent with previous reports (e.g., Frijda 1953; Harwood et al. 1999), we suggest that perceivers will exhibit greater sensitivity to the distinctions between smile types when viewing videos rather than photographs of the target expressions.

Participants were required to make two sets of judgments about the targets: one concerning the emotion being shown by the target, and one concerning the emotion being felt. These instructions were intended to place an emphasis on either the identification of facial expressions stereotypically associated with happiness (i.e., the show condition) or the detection of the presence of a positive emotional state in the target individual (i.e., the feel condition). We therefore expected that when judging the emotion shown, participants would be more likely to classify a non-enjoyment smile as displaying happiness than when judging the emotion felt, reflecting a criterion shift between judging any smile as "happy" and judging only those that accompany a positive emotional experience as "happy". Consistent with the findings of Frank et al. (1993), and with a functional explanation of social perception, it was predicted that perceivers would exhibit sensitivity to the differences in emotional state specified by enjoyment and non-enjoyment smiles. Specifically, we hypothesized that, overall, perceivers would accurately classify enjoyment smiles as reflecting happiness, and neutral expressions and non-enjoyment smiles as reflecting an absence of happiness.

Method

Facial Displays

A set of facial displays was generated specifically for use in this research. As discussed above, facial expressions can be classified according to the relative spontaneity of the exhibition of the expression (i.e., deliberate vs. spontaneous expressions) as well as the presence (or absence) of an underlying emotional state. We operationalized these distinctions by creating expressions that represented, in the context of social interaction, ecologically meaningful facial displays that were conceptually distinct. Specifically, we sought to create deliberately posed non-enjoyment smiles unrelated to a positive emotional experience, and spontaneous enjoyment smiles that occurred as part of a positive emotional experience.

Twelve participants were recruited and invited to the laboratory individually. Prior to agreeing to participate they were informed that the procedure involved recording their faces on video, but no information was provided specific to facial expressions of emotion or to smiling. The procedure for generating the facial displays consisted of five phases, designed to elicit: (i) neutral expressions; (ii) deliberate non-enjoyment smiles; (iii) positive mood induction; (iv) spontaneous enjoyment smiles (from sounds); and (v) spontaneous enjoyment smiles (from pictures). All materials were presented to participants on a standard 17-inch color CRT computer monitor, and video recordings were made using a Canon XM2 3CCD digital video camera mounted above the monitor. Each recording was standardized for brightness and contrast and compressed using an MPEG4v2 codec. No participants wore glasses or had any noticeable facial hair.

During the first phase of the procedure participants were asked to relax and look into the camera with a neutral facial expression.
The second phase required them to look into the camera and pose a series of smiles as they would in various everyday situations (e.g., having a passport photograph taken, having a family portrait taken). The third phase was intended to induce a positive mood: a recording of classical music previously shown to reliably induce a positive emotional state (Halberstadt and Niedenthal 1997) was played to participants. The fourth and fifth phases were designed to generate spontaneous enjoyment smiles in response to positively valenced sound clips and photographs respectively. Eleven sounds (see Appendix) were selected from the International Affective Digitized Sounds (IADS) database (Bradley and Lang 1999a) on the basis of high normative ratings of valence (>7.5 on a 9-point scale where 9 = positive) and adequate ratings of arousal (>5 on a 9-point scale where 9 = very arousing). Each sound was played individually and participants were asked to concentrate on the sound and try to imagine a situation in which it would occur. Twenty images (see Appendix) were selected from the International Affective Picture System (IAPS) database (Lang et al. 2001) using the same criteria as for the IADS sound clips. Each photograph was displayed for 15 s and participants were asked to look at it and think about how it made them feel.

Prior to the beginning of each phase, and at the end of the procedure, participants were asked to indicate their present mood on an analogue mood scale consisting of a 200 mm vertical line anchored at the top (very positive) and bottom (very negative). The mid-point of the scale was labelled "neutral". Mood was scored by measuring the distance from the mid-point to the mark made by the participant. Thus, mood scores could range from –100 (very negative) to 100 (very positive).

The video recording for each participant was coded for evidence of zygomatic major and/or orbicularis oculi contraction according to the Facial Action Coding System (FACS; Ekman et al. 2002) criteria for Action Unit (AU) 12 (zygomatic major contraction) and AU6 (orbicularis oculi contraction) respectively. The intensity of each expression was also classified using FACS criteria, and any other features related to the expression, in particular any other visible muscular contractions, were noted. All expressions were independently coded by a second coder. After discussion, an agreement rate of 100% was obtained.

For an expression to be categorized as a spontaneous enjoyment smile, three criteria needed to be met: the expression must have been exhibited during phase 4 or 5 of the procedure; the participant must have reported an increase in positive mood between the beginning of the mood induction phase and the end of the procedure; and there needed to be evidence, according to FACS criteria, for contraction of both zygomatic major and orbicularis oculi. Two participants failed to meet the positive mood criterion, and two other participants did not exhibit any smiles that included evidence of orbicularis oculi contraction. In contrast, smiles were classified as deliberate non-enjoyment smiles if there was evidence of zygomatic major contraction but no discernible contraction of orbicularis oculi, and they occurred during phase 2 of the procedure. Finally, neutral expressions were considered to be those without any noticeable facial muscle activity occurring during the first phase of the procedure. In total, spontaneous enjoyment smiles were obtained from 9 participants, while deliberate non-enjoyment smiles and neutral expressions were obtained from all 12 participants.
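The three classification rules above can be summarized as a simple decision function. This is an illustrative sketch, not the authors' code; the function and parameter names (phase, AU flags, mood criterion) are our own labels for the criteria described in the text.

```python
# Illustrative sketch of the expression-classification rules from the Method
# section. All names (classify_expression, au12, au6, mood_increased) are
# assumptions for illustration; the authors applied these criteria by hand.

def classify_expression(phase, au12, au6, mood_increased):
    """Classify a FACS-coded facial display.

    phase          -- procedure phase (1-5) in which the expression occurred
    au12           -- True if AU12 (zygomatic major) contraction was coded
    au6            -- True if AU6 (orbicularis oculi) contraction was coded
    mood_increased -- True if the participant reported increased positive mood
                      between the start of mood induction and the end
    """
    if phase in (4, 5) and au12 and au6 and mood_increased:
        return "spontaneous enjoyment smile"
    if phase == 2 and au12 and not au6:
        return "deliberate non-enjoyment smile"
    if phase == 1 and not au12 and not au6:
        return "neutral expression"
    # e.g., a smile in phase 4/5 without AU6, or with the mood criterion unmet
    return "unclassified"
```

Expressions failing any criterion (as happened for four of the twelve participants with respect to enjoyment smiles) fall through to the unclassified case.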
Three facial expressions (a neutral expression and two smiles) were selected from each of the 12 individuals who participated in the facial display generation procedure. For each of the 9 individuals from whom both enjoyment and non-enjoyment smiles were available, these expressions were matched in intensity according to FACS criteria. For the remaining three participants, two non-enjoyment smiles, also matched for intensity, were chosen. In total, 36 facial displays were included in the present study: 12 neutral expressions, 15 deliberate non-enjoyment smiles, and 9 spontaneous enjoyment smiles. Static photographs were obtained from each video-clip at the apex of each expression. An example of each expression from one participant is shown in Fig. 1.

Fig. 1 Examples of a neutral expression, deliberate non-enjoyment smile, and spontaneous enjoyment smile generated for the present research


Participants

Thirty-seven female students from the University of Canterbury volunteered to participate in return for a $2 lottery ticket. Seventeen participants were assigned to the static presentation condition, while the remaining 20 were assigned to the dynamic presentation condition.

Design²

A 2 (presentation condition: static/dynamic) × 3 (facial expression: neutral/non-enjoyment smile/enjoyment smile) × 2 (judgment type: show/feel) mixed-model design was employed for Experiment 1. Presentation condition was a between-participants factor, while the other two factors were within-participants. Order of facial expression presentation was randomized for each participant, while order of judgment type was counterbalanced. The dependent variable was the categorical emotion judgment: happy or not happy.

Procedure

Participants were invited to take part in research investigating impression formation and were tested individually. They were told that they would see a series of photographs, or video-clips, of different individuals and that their task was to judge whether each person was happy or not. Participants were informed that they would make the judgments twice, once judging the emotion shown and once the emotion felt by the target. All instructions were presented verbally by the experimenter and repeated on the computer screen. Facial displays were presented on a standard 17-inch color CRT computer monitor situated approximately 90 cm from the participant, using custom-written software that also recorded participants' responses (Walton 2003). Judgment decisions were indicated using designated keys on a standard computer keyboard corresponding to "happy" and "not happy". The procedure began with a series of practice trials using facial displays not included in the actual trials. Once the practice trials had been successfully completed, the experimenter left the room and the participant completed the first judgment condition (i.e., show or feel).
The experimenter then returned, reminded the participant of the change in judgment condition (i.e., show or feel), and the procedure was repeated. Participants in the dynamic presentation condition were instructed to respond only once the entire video-clip had been played, while those in the static presentation condition were able to respond at any point after the photograph had appeared on the screen. The entire procedure took approximately 20 min, after which the participants were debriefed, paid, and thanked for their time.

Results and Discussion

Participant responses were collated by presentation condition, facial expression, and judgment type, as shown in Table 1.

² Sex differences with regard to either the target facial displays (Experiments 1 and 2) or the participants (Experiment 2) are not considered in the analyses of the present research, as insufficient numbers were available to ensure adequate statistical power.


Table 1 Percentage of participants categorizing facial displays as happy by judgment condition, presentation condition, and facial expression (Experiment 1)

Presentation condition     Show (% happy)   Feel (% happy)   Total (% happy)

Static presentation
  Neutral expression             1                6                4
  Non-enjoyment smile           89               55               72
  Enjoyment smile               98               90               94

Dynamic presentation
  Neutral expression             4                7                6
  Non-enjoyment smile           71               34               53
  Enjoyment smile               78               72               75

As can be seen in Table 1, while neutral expressions were rarely identified as happy, both non-enjoyment and enjoyment smiles were frequently classified in this manner. Non-enjoyment smiles were frequently judged as reflecting happiness when judging the emotion shown, but less often when judging the emotion felt. However, the majority of enjoyment smiles were classified as reflecting happiness regardless of presentation or judgment condition.

In order to confirm these observations, a non-parametric signal detection analysis was performed (Green and Swets 1966; Macmillan and Creelman 1991; Snodgrass and Corwin 1988). Initially, hit and false alarm rates were calculated. A hit was defined as correctly identifying an enjoyment smile as happy, while a false alarm was defined as identifying either a neutral expression or a non-enjoyment smile as happy. The frequencies of hits and false alarms were converted to the associated rates by applying a correction formula recommended by Snodgrass and Corwin. Mean hit and false alarm rates are displayed in Table 2 as a function of presentation and judgment conditions.

Table 2 Mean Hit (HIT) and False Alarm (FA) rates, and estimates of sensitivity (A′) and response bias (B″) by presentation condition, judgment condition, and smile type (Experiment 1)

Presentation condition      HIT     FA      A′       B″

Static presentation
  "Show" judgments          0.93    0.50    0.83#    –0.62*
  "Feel" judgments          0.86    0.34    0.85#    –0.31*

Dynamic presentation
  "Show" judgments          0.88    0.48    0.80#    –0.47*
  "Feel" judgments          0.81    0.26    0.86#    –0.12

Note: Mean estimates of sensitivity (A′) with a # are significantly different from 0.5 (p < 0.05). Mean estimates of bias (B″) with a * are significantly different from 0 (p < 0.05).

Hit and false alarm rates were then used to calculate estimates of sensitivity and response bias separately by presentation and judgment condition for each participant, as shown in Table 2. Each sensitivity score was compared to 0.5 (representing chance-level responding and therefore no sensitivity) using single-sample t-tests. Sensitivity was significantly greater than 0.5 (p < 0.05) in all conditions, indicating that participants were able to reliably differentiate between expressions specifying happiness and those not specifying happiness. A 2 (presentation condition: static/dynamic) × 2 (judgment condition: show/feel) mixed-model ANOVA with repeated measures on the second factor was conducted on the sensitivity scores. There was only a significant main effect of judgment condition, F(1, 35) = 8.03, p < 0.01, ηp² = 0.19. Participants exhibited a greater degree of sensitivity to information specifying happiness when judging the emotion felt (MA′ = 0.85) compared to the emotion shown (MA′ = 0.82).

Inspection of the estimates of response bias (see Table 2) revealed a tendency for participants to be more likely to categorize any given expression as "happy" than "not happy". In the context of the current study, the presence and direction of this response bias is not unexpected, in that the majority of facial displays were smiles of some description, which are stereotypically associated with happiness. A 2 (presentation condition: static/dynamic) × 2 (judgment condition: show/feel) mixed-model ANOVA with repeated measures on the second factor was conducted on the estimates of response bias, which confirmed the predicted criterion shift between "show" and "feel" judgments. Judgments of the emotion shown (MB″ = –0.54) were accompanied by a greater degree of response bias than judgments of the emotion felt (MB″ = –0.21), F(1, 35) = 26.81, p < 0.01, ηp² = 0.43. Greater response bias was also exhibited when facial displays were presented statically (MB″ = –0.46) compared with dynamically (MB″ = –0.29), F(1, 35) = 7.24, p < 0.05, ηp² = 0.17. No interaction was revealed.

Overall, the results of Experiment 1 support the hypothesized effects and are consistent with the findings reported by Frank et al. (1993, study 2). Participants exhibited sensitivity to the differences between deliberate non-enjoyment smiles and spontaneous enjoyment smiles when making categorical judgments of happiness.
The distinction between judging the emotion shown versus the emotion felt led to more conservative decision making in the latter condition. Although errors were observed for all judgment conditions, it is notable that when judging emotional state most directly (i.e., feel judgments) from arguably the most informative display (i.e., dynamic presentation) false alarms were low. While not perfect, participants exhibited a clear ability to associate spontaneous enjoyment smiles with experienced happiness. However, explicit instructions to attend to emotional state may not generalize well to actual social interaction. During real-world interactions, the social perceiver is not ordinarily in the practice of making overt, explicit judgments of emotional state, or any other dispositional qualities, instead relying on more ‘‘on-line’’ and spontaneous means to deal effectively with such information. Hence, by drawing attention specifically to emotional state as in this experiment, participants may have been led to attend to aspects of the target individual’s behavior and appearance in a different, perhaps more thorough, manner than they might when engaging in an actual interaction. Our second experiment assessed sensitivity to the information specifying positive emotional state in a manner that did not draw the participants’ attention to explicit judgments of facial expression.
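The non-parametric indices reported in Table 2 are not written out as equations in the text. The following sketch shows one standard formulation of the corrected rates, A′, and B″ (e.g., Pollack and Norman 1964; Grier 1971; Snodgrass and Corwin 1988); the exact variants the authors used may differ slightly, so treat this as illustrative.

```python
# Illustrative sketch of the non-parametric signal detection indices used in
# Experiment 1. Formulas follow common formulations (Grier 1971; Snodgrass and
# Corwin 1988); the paper does not print its equations, so these are one
# standard variant, not necessarily the authors' exact computation.

def corrected_rate(count, n_trials):
    """Snodgrass-Corwin correction: keeps rates away from exactly 0 or 1."""
    return (count + 0.5) / (n_trials + 1)

def a_prime(hit, fa):
    """Non-parametric sensitivity A' (0.5 = chance, 1.0 = perfect)."""
    if hit >= fa:
        return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))
    return 0.5 - ((fa - hit) * (1 + fa - hit)) / (4 * fa * (1 - hit))

def b_double_prime(hit, fa):
    """Grier's non-parametric response bias B'' (negative = liberal)."""
    num = hit * (1 - hit) - fa * (1 - fa)
    den = hit * (1 - hit) + fa * (1 - fa)
    return num / den if den else 0.0
```

For the static/"show" cell of Table 2 (HIT = 0.93, FA = 0.50), a_prime gives approximately 0.83; the tabled values are means over participant-level estimates, so agreement with indices computed from mean rates is only approximate. The negative B″ values correspond to the liberal bias toward "happy" responses discussed above.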

Experiment 2

A small number of previous studies have examined aspects of emotion perception relevant to perceivers' sensitivity to the distinction between deliberate and spontaneous smiles without explicitly requiring participants to attend directly to the emotional state of an interaction partner. In a follow-up to their study reported earlier, Frank et al. (1993, study 3) required participants to form impressions of individuals displaying enjoyment or non-enjoyment smiles, reasoning that this approach resembled an interaction situation. Overall, ratings of impressions of individuals expressing enjoyment smiles were more positive than those of individuals expressing deliberately posed smiles. Using a similar approach but in a more naturalistic setting, Scherer and Ceschi (2000) asked airline staff at a major international airport to judge the emotional state of passengers who had reported their luggage lost, while the facial expressions of the passengers were surreptitiously videotaped. A comparison between passenger facial expressions and staff judgments revealed a positive relationship between the incidence of smiles that included orbicularis oculi contraction and ratings of perceived emotional state, but no such relationship with regard to the incidence of smiles without orbicularis oculi contraction. Surakka and Hietanen (1998) reported that viewers' facial muscle contractions (measured using facial EMG recordings) in both the eye (orbicularis oculi) and cheek (zygomatic major) regions were significantly stronger (i.e., greater EMG activity) when viewing smiles that featured orbicularis oculi contraction compared to neutral expressions, but no such difference was found when comparing smiles without orbicularis oculi contraction to neutral expressions. Moreover, participants reported feeling more positive and empathic toward an individual expressing a smile with orbicularis oculi contraction. Finally, Williams et al. (2001) reported research that investigated the visuocognitive strategies underlying the perception of facial expressions. Compared to neutral and sad facial expressions, perceivers made proportionately more and longer eye-fixations to the outer corners of the eyes when viewing smiling faces. The authors suggest this may reflect a perceptual strategy whereby, when a smile is detected, attention is spontaneously directed toward the facial information (i.e., the region of the face where contraction of orbicularis oculi is most visible) that is salient to determining the contextual meaning of the smile.
Thus, there is a small body of literature supporting the view that perceivers are sensitive to the meaning of a smile, even in the absence of any explicit judgment or decision process. To examine this hypothesis further, Experiment 2 in the present research employed a priming methodology. Previous research in this domain has indicated that exposure to a facial expression of emotion can influence subsequent behavior in a manner consistent with the affective valence of that expression. For instance, Murphy and Zajonc (1993) demonstrated that ratings of novel Chinese ideographs were more positive when preceded by a photograph of a smile than by a photograph of a frown. Similarly, Ravaja et al. (2004) embedded photographs of facial expressions of emotion in video-clips of news items; clips accompanied by smiles were perceived as more positive, trustworthy, and interesting than other clips. We sought to identify whether such effects apply differentially to enjoyment and non-enjoyment smiles. In the present task participants were required to categorize the semantic valence (positive or negative) of a series of target words, each of which was preceded by a facial expression prime. Less time is required to categorize the valence of a word when it is preceded by a prime whose valence is congruent with that of the target (e.g., Fazio et al. 1986). Furthermore, Sternberg et al. (1998) have shown that this effect extends to facial expression primes: the identification of positive words was faster following exposure to a smiling face than to a neutral expression. It is hypothesized that the categorization of target words will be differentially influenced by enjoyment and non-enjoyment smile primes.
Specifically, spontaneous enjoyment smile primes, as expressions of a positive emotional state, are predicted to facilitate identification of positive words while deliberate non-enjoyment smiles, as expressions that are affectively similar to neutral facial displays, are not predicted to exhibit this effect.


Method

Facial Displays

Three facial displays (a neutral expression, a deliberate non-enjoyment smile, and a spontaneous enjoyment smile) were selected from each of two individuals’ (one female) sets of expressions generated in the procedure described above.

Target Words

Thirty target words (15 positive, 15 negative) were selected from the Affective Norms for English Words (ANEW) database (Bradley and Lang 1999b). Words were selected on the basis of valence ratings and balanced for frequency of use. Positive words were selected from those rated above 7.5 on a 9-point scale, while negative words were selected from those rated below 2.5 on the same scale (see Appendix). Selecting words with very clear meanings was intended to minimize deliberation due to uncertainty when interpreting the words. The frequency of use of the positive and negative words (using estimates supplied with the ANEW database) did not differ significantly, t(28) = 0.16, p = 0.87 (Mpositive = 66.8, Mnegative = 61.8).
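The selection rule can be sketched in a few lines; the ratings and frequencies below are hypothetical stand-ins, not the actual ANEW values:

```python
# Hypothetical ANEW-style entries: (word, valence on a 9-point scale,
# frequency of use). These numbers are illustrative, not the ANEW data.
from statistics import mean

anew = [
    ("love", 8.72, 232), ("joy", 8.60, 40), ("kiss", 8.26, 17),
    ("hate", 2.12, 89), ("liar", 1.97, 2), ("sad", 1.61, 35),
    ("table", 5.22, 198),  # mid-valence words are never selected
]

positive = [w for w in anew if w[1] > 7.5]  # rated above 7.5
negative = [w for w in anew if w[1] < 2.5]  # rated below 2.5

# The two sets would then be checked for comparable frequency of use
print([w[0] for w in positive])  # ['love', 'joy', 'kiss']
print(mean(w[2] for w in negative))  # 42
```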

Participants

Participants in Experiment 2 were 14 students (7 female) recruited from the University of Canterbury who had not participated in Experiment 1. Each participant was given a $2 lottery ticket upon completion of the procedure.

Design

A 3 (facial expression: neutral/non-enjoyment smile/enjoyment smile) × 2 (word valence: positive/negative) within-participants design was employed. The order of facial expression and word presentation was randomized so that all participants saw all combinations of expressions and words.
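The fully crossed trial set implied by this design can be illustrated with a short sketch (not the original software; the placeholder names are ours):

```python
# Build the trial list: every expression paired with every target word,
# presented in a randomized order for each participant.
import itertools
import random

expressions = ["neutral", "non-enjoyment smile", "enjoyment smile"]
words = [f"word{i:02d}" for i in range(30)]  # placeholders for 15 pos + 15 neg

trials = list(itertools.product(expressions, words))
random.shuffle(trials)  # fresh random order per participant

print(len(trials))  # 90 trials: all combinations of expression and word
```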

Apparatus

Facial displays and words were presented on a 17″ color CRT computer monitor using software specifically designed for the task (Walton 2003) and a PIII 650 MHz personal computer running Windows XP Professional.

Procedure

Participants were invited to take part in an experiment that was described as investigating mood and word recognition and were tested individually. Upon arrival at the laboratory participants were seated approximately 60 cm from the computer monitor. Instructions for
the task were presented on the computer screen. Participants were informed that they would be seeing a series of words presented individually on the computer screen and their task was to decide, as quickly and accurately as possible, whether each word was positive or negative in meaning, which they should indicate with a key press. They were told that they may see a face appear on the screen, but this was to help orient their attention correctly on each trial so they should ignore the face and concentrate on the word judgment task. In order to help maintain the cover story of the experiment, participants were also required to complete an analogue mood scale prior to the computer task. The task began with a practice session consisting of 8 word judgment trials. Eight words and one facial photograph not used in the experiment itself were used for the practice session. On each trial a fixation cross first appeared in the middle of the screen. After a period, which was varied randomly from 1500 to 3000 ms to avoid anticipatory responses, the cross was replaced by a facial display that remained on the screen for 50 ms. This period is sufficient for accurate detection of emotional state (Dimberg et al. 2000), but does not allow time for any detailed examination. The facial display was immediately replaced by a target word that remained on the screen until the participant responded. After completion of the procedure participants were debriefed and thanked for their time. The entire procedure lasted approximately 20 min.
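The trial structure described above can be summarized schematically; this is a sketch, not the Walton (2003) software, with display and response collection stubbed out:

```python
# One trial's event sequence: jittered fixation, 50 ms prime, then the
# target word until a response (duration None = response-terminated).
import random

def trial_timeline():
    fixation = random.randint(1500, 3000)  # random jitter blocks anticipation
    return [
        ("fixation cross", fixation),
        ("facial display prime", 50),   # too brief for detailed scanning
        ("target word", None),          # remains until the key press
    ]

events = trial_timeline()
print(events[1])  # ('facial display prime', 50)
```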

Data Analysis

The data were cleaned and transformed prior to analysis. Initially any incorrect responses (identifying a positive word as negative or vice versa) were eliminated and the distributions of the remaining data were examined for each participant. As expected, a visual inspection suggested that these distributions were not normal, and therefore did not meet the assumptions of ANOVA. Hence a log10 transformation was applied to each participant’s data. After transformation, outliers (values outside the range M ± 3.0 SD) were removed for each participant as recommended by Uleman et al. (1996). In total 245 (5.8%) incorrect responses and 62 (1.5%) responses identified as outliers were removed from the dataset prior to analysis.
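The cleaning steps can be sketched as follows, using made-up reaction times for a single participant (the 5000 ms trial plays the role of an outlier):

```python
import math

# Correct-response RTs (ms) for one illustrative participant
rts = [612, 598, 640, 605, 587, 623, 610, 595, 630, 601, 615, 588, 642, 607, 5000]

# Log10-transform to reduce the positive skew typical of RT distributions
logs = [math.log10(rt) for rt in rts]

# Remove values outside M +/- 3 SD, computed for this participant
m = sum(logs) / len(logs)
sd = math.sqrt(sum((x - m) ** 2 for x in logs) / (len(logs) - 1))
clean = [x for x in logs if abs(x - m) <= 3 * sd]

print(len(rts) - len(clean))  # 1: the 5000 ms trial falls outside 3 SD
```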

Results and Discussion

Median reaction times were calculated for each participant by condition and compared using a 3 (facial expression: neutral/non-enjoyment smile/enjoyment smile) × 2 (word valence: positive/negative) repeated measures ANOVA. Analysis was performed on log10-transformed data, but results are reported as raw reaction times (antilogs) to aid interpretation. No main effects were revealed. Importantly, as predicted, a significant interaction was revealed between word valence and facial expression, F(2, 26) = 4.97, p < 0.05, η²p = 0.28. Post-hoc comparisons (Tukey a, p < 0.05) revealed that the time taken to identify negative words did not differ as a function of the facial expression prime that preceded the word. However, the identification of positive words was facilitated by an enjoyment smile prime (Menjoyment = 604 ms) compared to a neutral expression (Mneutral = 633 ms). No significant differences were revealed between the priming effects of enjoyment and non-enjoyment smiles (Mnon-enjoyment = 618 ms), nor between neutral expressions and non-enjoyment smiles, on the time taken to identify positive words (see Fig. 2).
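The summary step (analyze on the log scale, report antilogs) can be sketched as follows; the per-condition log RTs are invented, chosen only so the back-transformed medians echo the reported condition values:

```python
import math
from statistics import median

# Invented log10 RTs for positive-word trials in each prime condition
log_rts = {
    "enjoyment": [math.log10(v) for v in (590, 604, 620)],
    "non-enjoyment": [math.log10(v) for v in (610, 618, 625)],
    "neutral": [math.log10(v) for v in (628, 633, 641)],
}

# Medians computed on the log scale, then back-transformed to ms
antilog_medians = {k: round(10 ** median(v)) for k, v in log_rts.items()}
print(antilog_medians)  # {'enjoyment': 604, 'non-enjoyment': 618, 'neutral': 633}
```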


[Fig. 2 Graph of time (msecs) taken to categorize positive words as a function of facial expression prime (Experiment 2). Y-axis: reaction time (msecs), 580–650; error bars show Mean ± 0.95*SE. X-axis: facial expression prime (neutral expression, non-enjoyment smile, enjoyment smile).]

These results demonstrate the priming effect of spontaneous enjoyment smiles on an affectively congruent subsequent task. Compared to the baseline of exposure to a neutral facial expression, exposure to an enjoyment smile facilitated subsequent identification of the positive target words but exposure to a non-enjoyment smile did not. However, although a clear linear trend in the direction of the predicted effect is evident in Fig. 2, there was no equivalent significant difference when the priming effects of enjoyment and non-enjoyment smiles were compared directly. Therefore the present data fall short of direct evidence for perceiver sensitivity to smile type but do indicate a strong trend in this direction. These results may not be wholly unexpected in light of previous research demonstrating that generic positively valenced primes can show priming effects similar to those exhibited by non-enjoyment smiles in the present study (e.g., Murphy and Zajonc 1993; Sternberg et al. 1998). For instance, even a stimulus as simple as a line drawing of a smiling face can show similar effects when employed as an affective prime (Stapel et al. 2002). In this sense, any form of smile can be expected to exhibit, to some degree, a priming effect similar to that seen in the present experiment. In addition, the results from Experiment 1 indicated that accuracy at differentiating between enjoyment and non-enjoyment smiles was less than perfect (i.e., below 100%) and that when errors were made, participants were most likely to misidentify a non-enjoyment smile as an enjoyment smile, as indicated by the direction of the response bias. This would suggest that in the present experiment any errors of perception would have been likely to speed the identification of positive words preceded by a non-enjoyment smile prime, thereby decreasing the likelihood that the priming effects of the two smile types would differ significantly.
Taking these issues into consideration, the linear trend apparent in these data is indicative of some sensitivity to smile type exhibited without specific instruction to judge emotional state.

General Discussion

It has been suggested, based on a functional account of social perception, that the physiognomic differences between deliberately posed non-enjoyment smiles and spontaneous enjoyment smiles specify, to a suitably sensitive social perceiver, different opportunities for interaction. The present research investigated, across two experiments, the sensitivity of
social perceivers to this information. The first experiment, a judgment task, revealed a significant level of sensitivity to the distinctions between facial expressions that did and did not specify experience of a positive emotional event. The second experiment, a priming task, showed that, compared to a neutral expression, enjoyment smiles showed an affectively congruent priming effect while non-enjoyment smiles did not, although when compared directly there was no significant difference between the effects of the two smile types. These findings provide further empirical support for the claim that perceivers are sensitive to smile type (cf. Frank et al. 1993; Hess et al. 1989). Experiment 1 revealed clear, statistically significant evidence of perceivers’ sensitivity to the differences between enjoyment and non-enjoyment smiles, which was corroborated by the trend evident in the second experiment. The discrepancy in the strength of findings between the two experiments may suggest that the time available to perceive the target facial expression constrains perceptual accuracy. The contrast between explicitly requiring participants to judge the qualities of a facial expression (Experiment 1) and asking them to ignore any faces they see (Experiment 2) may indicate that some opportunity for deliberation enhances sensitivity to smile type. Although it is possible that asking participants to ignore the facial expression primes (Experiment 2) may in fact have led them to notice and attend to the faces, the limited exposure (50 ms) to these expressions again, perhaps, constrained the information available for perception. Indeed, if, as suggested by Williams et al. (2001), perceiving a smile characteristically entails more than one saccade, then restricting the time available for perception to less than that required for a saccade (typically >100 ms) may place a ceiling on the potential to distinguish between smile types.
In turn, this may have contributed to the lack of a significant difference when the priming effects of enjoyment and non-enjoyment smiles were compared directly (Experiment 2). Despite the possible impact of restricting the time available for perception, however, the clear linear trend evident in the priming study (Experiment 2) corroborates the findings of the judgment task (Experiment 1), as well as previous literature, in supporting the claim that perceivers are sensitive to the distinctions between enjoyment and non-enjoyment smiles. One important implication of research in this area concerns the ecological validity of the experimental materials employed when studying the social perception of emotion. Given that perceivers can use the meaningful physiognomic differences between enjoyment and non-enjoyment smiles when judging these expressions, as demonstrated in the present research and in previous literature, care should be taken to ensure that experimental materials accurately reflect the intended real-world referent. In this sense, further research is required to uncover, more specifically, the information that perceivers actually make use of when determining the meaning of a smile. We also suggest that additional research is required to establish that the sensitivity examined in laboratory experiments generalizes to the behavioral domain, in terms of being manifest in appropriate patterns of interaction. In sum, the functional utility of knowing the meaning of a smile can be realized by social perceivers. The present research has demonstrated that perceivers can detect the emotional state of a smiling individual, which, it has been argued, is important for ensuring effective interaction.
Acknowledgments The authors wish to thank Vicki Peace for her assistance with data collection for Experiment 1, Paul Walton for writing the software used in this research, and Dean Owen, Tracey McLellan, and two anonymous reviewers for comments on earlier versions of this manuscript.


Appendix

Materials

Facial Display Generation

International Affective Digitized Sounds (IADS):
Males: #116 (cardinal), #815 (rock and roll), #201 (erotic female), #352 (sports crowd), #220 (boy laugh), #820 (funk music), #110 (baby laugh), #351 (applause), #202 (erotic female), #353 (baseball), #215 (erotic couple).
Females: #812 (choir), #815 (rock and roll), #353 (baseball), #220 (boy laugh), #215 (erotic couple), #820 (funk music), #221 (male laugh), #351 (applause), #110 (baby laugh), #201 (erotic female), #401 (applause).

International Affective Picture System (IAPS):
Males: #1460 (kitten), #4607 (erotic couple), #8190 (snow skiing), #4180 (erotic female), #2050 (baby), #1750 (rabbits), #2040 (baby), #1920 (dolphin soccer), #2070 (baby), #4220 (erotic female), #2080 (babies), #4250 (erotic female), #1710 (puppies), #4210 (erotic female), #4232 (erotic female), #1440 (seal), #4664 (erotic couple), #4652 (erotic couple), #8510 (car), #2260 (baby).
Females: #5760 (garden), #1920 (dolphin soccer), #1440 (seal), #2050 (baby), #1460 (kitten), #1710 (puppies), #2057 (baby), #1610 (rabbit), #1750 (rabbits), #2040 (baby), #2395 (women), #4607 (erotic couple), #5830 (sunset), #2058 (baby), #2070 (baby), #2080 (babies), #2091 (children), #8190 (snow skiing), #2165 (man and baby), #2340 (grandfather and children).

Experiment 2—Target Words

Positive: approachable, authentic, decent, friendly, fun, genuine, honest, joy, kiss, love, respectable, sincere, trustworthy, truthful, valid.
Negative: bogus, corrupt, deceitful, depressed, devious, dishonest, failure, false, fraud, hate, liar, repulsive, sad, terrible, unreliable.

References

Abel, M. H. (2002). The elusive nature of smiling. In M. H. Abel (Ed.), An empirical reflection on the smile (pp. 1–13). Lewiston, NY: Edwin Mellen Press.
Adelmann, P. K., & Zajonc, R. B. (1989). Facial efference and the experience of emotion. Annual Review of Psychology, 40, 249–280.
Bradley, M. M., & Lang, P. J. (1999a). International affective digitized sounds (IADS): Stimuli, instruction manual and affective ratings. Technical Report B-2. Gainesville, FL: The Center for Research in Psychophysiology, University of Florida.
Bradley, M. M., & Lang, P. J. (1999b). Affective norms for English words (ANEW). Gainesville, FL: The NIMH Center for the Study of Emotion and Attention, University of Florida.
Chartrand, J., & Gosselin, P. (2005). Jugement de l’authenticité des sourires et détection des indices faciaux. Canadian Journal of Experimental Psychology, 59, 179–189.
Darwin, C. (1872/1998). The expression of the emotions in man and animals (3rd ed.). London: Harper Collins.


Davidson, R. J., Ekman, P., Saron, C. D., Senulis, J. A., & Friesen, W. V. (1990). Approach-withdrawal and cerebral asymmetry: Emotional expression and brain physiology: I. Journal of Personality and Social Psychology, 58, 330–341.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to emotional facial expressions. Psychological Science, 11, 86–89.
Duchenne, B. (1862/1990). The mechanisms of human facial expression or an electrophysiological analysis of the expression of emotions (A. Cuthbertson, Trans.). New York: Cambridge University Press.
Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In J. K. Cole (Ed.), Nebraska symposium on motivation, 1971 (pp. 207–283). Lincoln: University of Nebraska Press.
Ekman, P. (2001). Telling lies: Clues to deceit in the marketplace, politics, and marriage. New York: W. W. Norton.
Ekman, P. (2003). Emotions revealed: Recognizing faces and feelings to improve communication and emotional life. New York: Times Books/Henry Holt.
Ekman, P., Davidson, R. J., & Friesen, W. V. (1990). The Duchenne smile: Emotional expression and brain physiology: II. Journal of Personality and Social Psychology, 58, 342–353.
Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17, 124–129.
Ekman, P., & Friesen, W. V. (1982). Felt, false, and miserable smiles. Journal of Nonverbal Behavior, 6, 238–258.
Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial Action Coding System [CD-Rom]. Salt Lake City, UT: Nexus.
Ekman, P., Friesen, W. V., & O’Sullivan, M. (1988). Smiles when lying. Journal of Personality and Social Psychology, 54, 414–420.
Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., et al. (1987). Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 53, 712–717.
Ekman, P., Hager, J. C., & Friesen, W. V. (1981). The symmetry of emotional and deliberate facial actions. Psychophysiology, 18, 101–106.
Ekman, P., Sorenson, E. R., & Friesen, W. V. (1969). Pan-cultural elements in facial displays of emotion. Science, 164, 86–88.
Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128, 203–235.
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50, 229–238.
Fox, N. A., & Davidson, R. J. (1988). Patterns of brain electrical activity during facial signs of emotion in 10-month-old infants. Developmental Psychology, 24, 230–236.
Frank, M. G. (2002). Smiles, lies, and emotion. In M. H. Abel (Ed.), An empirical reflection on the smile (pp. 15–43). Lewiston, NY: Edwin Mellen Press.
Frank, M. G., Ekman, P., & Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of enjoyment. Journal of Personality and Social Psychology, 64, 83–93.
Frank, M. G., & Stennett, J. (2001). The forced-choice paradigm and the perception of facial expressions of emotion. Journal of Personality and Social Psychology, 80, 75–85.
Fredrickson, B. L. (1998). What good are positive emotions? Review of General Psychology, 2, 300–319.
Frijda, N. H. (1953). The understanding of facial expression of emotion. Acta Psychologica, 9, 294–362.
Gosselin, P., Beaupré, M., & Boissonneault, A. (2002). Perception of genuine and masking smiles in children and adults: Sensitivity to traces of anger. The Journal of Genetic Psychology, 163, 58–71.
Gosselin, P., Perron, M., Legault, M., & Campanella, P. (2002). Children’s and adults’ knowledge of the distinction between enjoyment and nonenjoyment smiles. Journal of Nonverbal Behavior, 26, 83–108.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. Oxford, England: Wiley.
Halberstadt, J. B., & Niedenthal, P. M. (1997). Emotional state and the use of stimulus dimensions in judgment. Journal of Personality and Social Psychology, 72, 1017–1033.
Harwood, N. K., Hall, L. J., & Shinkfield, A. J. (1999). Recognition of facial emotional expressions from moving and static displays by individuals with mental retardation. American Journal on Mental Retardation, 104, 270–278.
Hecht, M. A., & LaFrance, M. (1998). License or obligation to smile: The effect of power and sex on amount and type of smiling. Personality and Social Psychology Bulletin, 24, 1332–1342.
Hess, U., Banse, R., & Kappas, A. (1995). The intensity of facial expression is determined by underlying affective state and social situation. Journal of Personality and Social Psychology, 69, 280–288.
Hess, U., Kappas, A., McHugo, G. J., Kleck, R. E., & Lanzetta, J. T. (1989). An analysis of the encoding and decoding of spontaneous and posed smiles: The use of facial electromyography. Journal of Nonverbal Behavior, 13, 121–137.


Hess, U., & Kleck, R. E. (1990). Differentiating emotion elicited and deliberate emotional facial expressions. European Journal of Social Psychology, 20, 369–385.
Ikuta, M. (1999). The self-regulatory of facial expression in conflict discourse situation. Japanese Journal of Counselling Science, 32, 43–48.
Izard, C. E. (1994). Innate and universal facial expressions: Evidence from developmental and cross-cultural research. Psychological Bulletin, 115, 288–299.
Keating, C. F., & Heltman, K. R. (1994). Dominance and deception in children and adults: Are leaders the best misleaders? Personality and Social Psychology Bulletin, 20, 312–321.
Keltner, D., & Haidt, J. (1999). Social functions of emotions at four levels of analysis. Cognition and Emotion, 13, 505–521.
Krumhuber, E., & Kappas, A. (2005). Moving smiles: The role of dynamic components for the perception of the genuineness of smiles. Journal of Nonverbal Behavior, 29, 3–24.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2001). International affective picture system (IAPS): Instruction manual and affective ratings. Technical Report A-5. The Center for Research in Psychophysiology, University of Florida.
Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A user’s guide. New York: Cambridge University Press.
Murphy, S. T., & Zajonc, R. B. (1993). Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures. Journal of Personality and Social Psychology, 64, 723–739.
Owren, M. J., & Bachorowski, J.-A. (2001). The evolution of emotional experience: A “selfish-gene” account of smiling and laughter in early hominids and humans. In T. J. Mayne & G. A. Bonanno (Eds.), Emotions: Current issues and future directions (pp. 152–191). New York: Guilford Press.
Peace, V., Miles, L., & Johnston, L. (2006). It doesn’t matter what you wear: The impact of posed and genuine expressions of happiness on product evaluation. Social Cognition, 24, 137–168.
Ravaja, N., Kallinen, K., Saari, T., & Keltikangas-Järvinen, L. (2004). Suboptimal exposure to facial expressions when viewing video messages from a small screen: Effects on emotion, attention, and memory. Journal of Experimental Psychology: Applied, 10, 120–131.
Rinn, W. E. (1984). The neuropsychology of facial expression: A review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95, 52–77.
Scherer, K. R., & Ceschi, G. (2000). Criteria for emotion recognition from verbal and nonverbal expression: Studying baggage loss in the airport. Personality and Social Psychology Bulletin, 26, 327–339.
Schmidt, K. L., Ambadar, Z., Cohn, J. F., & Reed, L. I. (2006). Movement differences between deliberate and spontaneous facial expressions: Zygomaticus major action in smiling. Journal of Nonverbal Behavior, 30, 37–52.
Schmidt, K. L., Cohn, J. F., & Tian, Y. (2003). Signal characteristics of spontaneous facial expressions: Automatic movement in solitary and social smiles. Biological Psychology, 65, 49–66.
Shiota, M. N., Campos, B., Keltner, D., & Hertenstein, M. J. (2004). Positive emotion and the regulation of interpersonal relationships. In P. Philippot & R. S. Feldman (Eds.), The regulation of emotion (pp. 127–155). Mahwah, NJ: Erlbaum.
Skinner, M., & Mullen, B. (1991). Facial asymmetry in emotional expression: A meta-analysis of research. British Journal of Social Psychology, 30, 113–124.
Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50.
Soussignan, R. (2002). Duchenne smile, emotional experience, and autonomic reactivity: A test of the facial feedback hypothesis. Emotion, 2, 52–74.
Stapel, D. A., Koomen, W., & Ruys, K. I. (2002). The effects of diffuse and distinct affect. Journal of Personality and Social Psychology, 83, 60–74.
Sternberg, G., Wiking, S., & Dahl, M. (1998). Judging words at face value: Interference in a word processing task reveals automatic processing of affective facial expressions. Cognition and Emotion, 12, 755–782.
Surakka, V., & Hietanen, J. K. (1998). Facial and emotional reactions to Duchenne and non-Duchenne smiles. International Journal of Psychophysiology, 29, 23–33.
Uleman, J. S., Hon, A., Roman, R. J., & Moskowitz, G. B. (1996). On-line evidence for spontaneous trait inferences at encoding. Personality and Social Psychology Bulletin, 22, 377–394.
Walton, P. R. (2003). The Lexical Decision Computer Task (Version 1.7.21) [Computer software]. Christchurch, New Zealand: Dexterware.
Williams, L. M., Senior, C., David, A. S., Loughland, C. M., & Gordon, E. (2001). In search of the “Duchenne Smile”: Evidence from eye movements. Journal of Psychophysiology, 15, 122–127.
