Journal of Applied Psychology 2000, Vol. 85, No. 6, 880-887

Copyright 2000 by the American Psychological Association, Inc. 0021-9010/00/$5.00 DOI: 10.1037/0021-9010.85.6.880

Examining the Impact of Administration Medium on Examinee Perceptions and Attitudes

Wendy L. Richman-Hirsch
William M. Mercer, Inc.

Julie B. Olson-Buchanan
California State University, Fresno

Fritz Drasgow
University of Illinois at Urbana-Champaign

The present study explored the impact of administration medium on examinees' affective reactions. The research compared managers' reactions to 3 versions of the Conflict Resolution Skills Assessment (J. B. Olson-Buchanan et al., 1998) that were identical in content but varied in the level of technology used: a paper-and-pencil form, a written form administered by computer (i.e., a computerized page-turner), and a multimedia form administered by computer. Managers completing the multimedia assessment perceived the assessment as more face valid and had more positive attitudes, relative to managers who completed the other 2 assessments. Computerization, however, was not enough to make a difference; instead, it was the multimedia nature of the computer presentation that resulted in the most positive affective reactions. Study limitations and implications for research and practice are discussed.

Wendy L. Richman-Hirsch, William M. Mercer, Inc., New York; Julie B. Olson-Buchanan, Department of Management, California State University, Fresno; Fritz Drasgow, Department of Psychology, University of Illinois at Urbana-Champaign. The research reported here was supported in part by the Craig School of Business Research and Scholarly Activity Grant. We are grateful to the undergraduate research assistants from California State University, Fresno, for their assistance. Correspondence concerning this article should be addressed to Julie B. Olson-Buchanan, Department of Management, California State University, 5245 North Backer Avenue, M/S PB 7, Fresno, California 93740-8001. Electronic mail may be sent to [email protected].

Computerization has a growing impact on personnel selection and diagnostic assessment. "Computers offer efficiency advantages over traditional formats such as reducing transcription errors, and make possible new measurement options such as interactive branching, personalized probes, and provision of explanatory material and on-line help" (Richman, Kiesler, Weisband, & Drasgow, 1999). Because of these and other advantages, many researchers and employers have converted paper-and-pencil tests to computerized formats or have developed innovative new types of assessments. Chapters in Drasgow and Olson-Buchanan (1999), for example, describe applications ranging from the computerization of the Graduate Record Exam (Mills, 1999) to the assessment of conflict resolution skills (Drasgow, Olson-Buchanan, & Moberg, 1999) and musical aptitude (Vispoel, 1999).

Recent technological advances have led researchers to explore the development and application of various multimedia diagnostic assessments. Multimedia computerized assessments, as compared with traditional computer-based tests, can use full-motion video and stereo sound to assess critical job-relevant skills and diagnose training needs; such instruments can assess abilities and social skills that have typically been measured poorly by traditional paper-and-pencil tests (Olson-Buchanan & Drasgow, 1999). The technological advances in multimedia assessment provide the key to the present research; such advances highlight the need for a study of the potential advantages of computerized multimedia assessments in the workplace compared with traditional paper-and-pencil and computer-based tests.

Multimedia technology (at least in laser disk form) has been available for nearly 10 years, yet relatively little information about its use for assessment has been published (see Ackerman, Evans, Park, Tamassia, & Turner, 1999; Hanson, Borman, Mogilka, Manning, & Hedge, 1999; and Vispoel, 1999, for notable exceptions). However, the topic of multimedia assessment has received attention at professional conferences. Researchers have described their work on a variety of issues, including the logistics of managing the development of such assessments (e.g., Dyer, Desmarais, Midkiff, Colihan, & Olson, 1992), scoring (e.g., Ashworth & Joyce, 1994; Olson & Keenan, 1994), and criterion-related validity (e.g., Donovan, Drasgow, & Bergman, 1998; Masi & Desmarais, 1996; Olson-Buchanan, Drasgow, Moberg, & Donovan, 1996). The findings from this research have demonstrated several desirable features of multimedia assessments. For example, Olson-Buchanan et al. (1998) found that the Conflict Resolution Skills Assessment, unlike many paper-and-pencil selection tests, had no adverse impact on women or minorities. Hanson et al. (1999) presented impressive construct validity evidence for their multimedia work-sample tests. Multimedia assessments have also been shown to provide incremental validity when used in conjunction with traditional cognitive ability tests (Olson-Buchanan et al., 1998) and seem likely to predict aspects of the criterion space (e.g., interpersonal relations) that are not easily predicted by cognitive ability.

How much technology is really needed? Could the benefits just listed have been obtained by alternative means? Motowidlo, Dunnette, and Carter (1990), for example, described the "low fidelity simulation" (p. 640), which consists of workplace situations described in written form. Moreover, the validity coefficients reported for multimedia assessments have not been higher than those typically obtained for carefully developed cognitive ability tests.

The research reported in this article explicitly examines one of the hypothesized benefits of multimedia assessment: applicant reactions. It seems reasonable to believe that multimedia assessments generate higher interest among applicants and trainees. However, very little research has examined this presumed benefit. Given the large investment required for multimedia assessment, an investigation into applicants' reactions to multimedia assessment seems important. Our study compares examinees' impressions of three versions of the Conflict Resolution Skills Assessment (Olson-Buchanan et al., 1998) that are identical in content but vary in the level of technology used: a paper-and-pencil form, a written form administered by computer (herein referred to as a computerized page-turner), and the original full-motion video form administered by computer. A unique feature of this study is that all three versions of the assessment were interactive. That is, assessees' responses to the first part of each question determined the stimulus they received in the second part of the question.

Examinee Reactions

One very important potential benefit of multimedia assessment is examinees' affective reactions. Researchers have investigated examinees' reactions to drug testing, cognitive ability testing, employment interviews, assessment centers, and even computer-based testing (e.g., Crant & Bateman, 1990, 1993; Harris & Fink, 1987; Kluger & Rothstein, 1993; Martin & Nagao, 1989; Macan, Avedon, Paese, & Smith, 1994; Rynes & Connerley, 1993; Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993). Chan and Schmitt (1996) examined whether examinees' perceptions of an assessment are affected by the medium in which the assessment is delivered. Specifically, they compared examinees' reactions to a video-based (video cassette recorder, or VCR) assessment with reactions to a paper-and-pencil version of the same assessment. However, neither of these assessments was computer administered. Thus, examinees' reactions to computerized video assessments remain largely unexplored.

Importance of Examinee Reactions

The importance of examinees' reactions to assessment procedures has been identified in several studies. Researchers have demonstrated that negative reactions to assessment procedures may reduce motivation to perform well on the test (Arvey, Strickland, Drauden, & Martin, 1990) and subsequently bias the test scores; such reactions may ultimately affect the validity of the assessment. In addition, researchers have found that job applicants' impressions of the assessment process affect their attraction to the organization, the likelihood of litigation, and the utility of the assessment instrument (e.g., Smither et al., 1993). Selection procedures are an important source of information for applicants about the organization (Macan et al., 1994; Smither et al., 1993).

Studies have shown that recruiting practices can influence the pursuit or acceptance of job offers (Harris & Fink, 1987; Macan et al., 1994; Powell, 1991; Rynes, Bretz, & Gerhart, 1991; Schmitt & Coyle, 1976), and the image that is created by a given selection procedure may affect the organization's ability to attract and recruit qualified applicants (Smither et al., 1993). Job applicants' impressions of the selection procedures also have ethical and legal implications, because when selection devices are viewed as invalid, offensive, or intrusive, complaints and sometimes legal action may arise (e.g., Cascio, 1991; Rynes & Connerley, 1993; Smither et al., 1993).

Perceptions and Attitudes

Following the framework of Macan et al. (1994), the present study makes a distinction between examinees' perceptions, which describe the particular assessment procedure (e.g., face validity and perceived fairness), and examinees' overall attitudes, which indicate how well they liked the assessment and the assessment process in general. We hypothesize that examinees have differential perceptions of and attitudes toward the assessment depending on the medium of the test administration (e.g., paper-and-pencil, computerized page-turner, or computerized multimedia).

Perceptions: Face Validity and Perceived Fairness

Face validity refers to the extent to which examinees perceive an assessment instrument as related to the job. According to several researchers, there are three facets of face validity: perceived content validity, perceived predictive validity, and the extent to which examinees feel that the assessment provides relevant information about the job (Gilliland, 1993; Macan et al., 1994; Smither et al., 1993). Kudisch, Poole, Dobbins, and Ladd (1995) and Smither et al. (1993) both found that examinees perceive simulations to be the most face valid of selection tools. Kudisch et al. (1995) argued that simulations and assessment centers are viewed as highly content valid because they are "designed to emulate actual job situations" (p. 1). Furthermore, it has been suggested that computerized video assessments can provide realistic job previews to applicants, because actual workplace situations are displayed (Drasgow, Olson, Keenan, Moberg, & Mead, 1993). Because paper-and-pencil assessments and written forms administered by computer (computerized page-turners) involve reading and interpreting the written word, whereas full-motion video assessments involve interpreting both verbal and nonverbal visual cues, we predict that examinees will perceive the multimedia assessment as more face valid, compared with both paper-and-pencil assessments and computerized page-turners.

Hypothesis 1. Examinees perceive the multimedia assessment as having more content validity and predictive validity, as well as providing more relevant information about the job, compared with the paper-and-pencil and computerized page-turner assessments. There are no differences between the paper-and-pencil and the computerized page-turner assessments.

According to organizational justice theory, the overall fairness of a diagnostic instrument is determined by both procedural and distributive justice (e.g., Gilliland, 1993; Smither et al., 1993). Procedural justice refers to examinees' perceptions of the fairness of the assessment procedure or process, whereas distributive justice refers to examinees' perceptions of the fairness of the organizational outcome distributions (e.g., fairness of the assessment test scores). As suggested previously, multimedia assessments can provide realistic job previews and accurate expectations for future training initiatives by displaying actual workplace scenarios. Multimedia assessments use full-motion video with high-fidelity simulations that "permit the test-taker to appear to be a participant" in the social interaction (Drasgow et al., 1993, p. 201). Therefore, we expect that examinees will perceive the multimedia assessment as more procedurally and distributively fair, because they are responding to real job situations and their answers are in direct response to watching a job scenario being acted out on the computer screen. In comparison, we expect that examinees will perceive the paper-and-pencil and the computerized page-turner assessments as less fair, because their answers are in response to reading a job situation, without the additional verbal and nonverbal cues that result from watching actual behaviors. That is, examinees will perceive these assessments to be less fair because the medium by which they learn about the job situation in the assessment (e.g., by reading) is substantially different from the medium by which they would learn about such job situations on the job (e.g., by watching and listening).

Hypothesis 2. Examinees perceive the multimedia assessment as more procedurally and distributively just than both the paper-and-pencil and the computerized page-turner assessments. There are no differences between the paper-and-pencil and the computerized page-turner assessments.

Attitudes: Enjoyment, Shortness, Satisfaction, and Modernization

Previous research on examinees' reactions to computerized assessments has found that examinees respond favorably to computerized tests (e.g., Burke, Normand, & Raju, 1987; Hedl, O'Neil, & Hansen, 1973; Schmidt, Urry, & Gugel, 1978; Schmitt, Gilliland, Landis, & Devine, 1993; Skinner, Allen, McIntosh, & Palmer, 1985) and report greater motivation on such tests (e.g., Arvey et al., 1990). Computerized tests are also perceived as more interesting, more relaxing, and shorter than paper-and-pencil tests (e.g., Lukin, Dowd, Plake, & Kraft, 1985; Millstein, 1987; Rozensky, Honor, Rasinski, Tovian, & Herz, 1986; Skinner & Allen, 1983). Therefore, we expect the two computerized assessments to be viewed as more enjoyable and shorter, and to result in greater satisfaction with the assessment process, than the paper-and-pencil assessment. Furthermore, the limited research that does exist on multimedia assessments has revealed that examinees respond positively to multimedia tests (Dyer, Desmarais, & Midkiff, 1993; McHenry & Schmitt, 1994). Because the multimedia assessment engages more human senses (e.g., vision, hearing) and requires a smaller cognitive component (e.g., less reading) than the computerized page-turner and the paper-and-pencil assessments do, we expect it to yield the most positive attitudes of the three media. Consequently, we make the following predictions regarding media of administration and examinee attitudes.

Hypothesis 3. Examinees completing the multimedia assessment and the computerized page-turner assessment find the assessments more enjoyable, interesting, and shorter and are more satisfied with the assessment process than are examinees completing the paper-and-pencil assessment. The multimedia assessment results in the most positive attitudes of the three assessments.

An organization's use of computers for assessment purposes may create an image that the organization possesses advanced technical knowledge and treats its examinees with professionalism. Consequently, we expect that the two computerized assessments will be viewed as more modern than the paper-and-pencil assessment.

Hypothesis 4. Examinees completing the multimedia assessment and the computerized page-turner assessment perceive the assessments as more up-to-date, modern diagnostic devices, compared with the examinees completing the paper-and-pencil assessment. The multimedia assessment is viewed as the most modern.

Method

Participants

Participants in this study were 131 managers from several organizations, ranging from manufacturing to retail. The majority of the participants were White (71%), middle-aged (M = 42 years) men (69%). Participants had been with their organization an average of 6 years and supervised approximately eight people.

Procedure

Participants were randomly assigned to complete the multimedia assessment, the computerized page-turner assessment, or the paper-and-pencil assessment. The linguistic content of the three assessments was identical; only the medium of administration varied. Participants were unaware of the two conditions to which they were not assigned; the assessments were administered in separate rooms. Immediately following the assessment, participants completed a reactions survey. They were then debriefed on the nature of the study and thanked for their time.

Assessments

Computerized multimedia. The multimedia assessment used interactive video to present a variety of scenes depicting workplace conflict situations on the computer screen (Olson-Buchanan et al., 1998). The assessment contained nine main conflict situations and four branches for each main scene. First, a main scene depicting a conflict incident was presented. At a critical juncture, the scene was frozen and four multiple-choice options for addressing the conflict were provided in a written format. The assessee was asked to choose the option that he or she believed would provide the best way to deal with the dispute. Depending on the option chosen, the computer branched to an extension of the first scene depicting a likely outcome of the option the participant selected (e.g., how the events might unfold). Again, the conflict escalated, the scene froze, four options for addressing the conflict were presented, and the examinee was asked to decide which option would best resolve the conflict. The computer then branched to an entirely new conflict scene. Olson-Buchanan et al.'s (1998) hybrid key was used to score responses to the multimedia assessment, as well as to the other two versions.

Computerized page-turner. The content of the computerized page-turner and the paper-and-pencil assessment was identical to that of the multimedia assessment, but the medium of administration differed. For the computerized page-turner, the workplace scenarios were presented on the computer screen in written, dialogue form (e.g., similar to a script). Examinees were able to return to previous screens to reread preceding sections of the dialogue; however, they were not able to change their responses to previous scenes. As with the multimedia assessment, examinees were prompted to indicate how they would deal with each dispute. Depending on the individual's response, the computer branched to a scene portraying a likely outcome.

Paper-and-pencil. Similar to the computerized page-turner, the paper-and-pencil version presented workplace scenarios in written, dialogue form. Depending on their response to each conflict, the examinees were instructed to turn to a specific set of pages that described a likely outcome of their response. Each set of pages describing a potential outcome of a scenario was sealed so that examinees could not preview the likely outcomes before responding to the initial scenario. Respondents were instructed to break the seal on only one of the likely outcomes for each given scenario. (We could detect cheating by looking for more than one broken seal per scenario.)
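The same branching logic underlies all three versions; only the stimulus medium differs. The sketch below is a minimal illustration of how that logic could be implemented; the names (Scene, administer, ask) are our hypothetical choices, not the authors' software.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """One conflict scene: a stimulus plus four response options."""
    stimulus: str                   # video clip, on-screen script, or page range
    options: list[str]              # the four multiple-choice responses
    branches: dict[int, "Scene"] = field(default_factory=dict)

def administer(main_scenes: list[Scene], ask) -> list[tuple[int, int]]:
    """Run the assessment: each main scene branches once, based on the
    examinee's first choice, before moving on to the next main scene."""
    responses = []
    for scene in main_scenes:       # nine main conflict situations
        first = ask(scene)          # examinee picks an option (0-3)
        follow_up = scene.branches[first]
        second = ask(follow_up)     # respond to the depicted outcome
        responses.append((first, second))
    return responses
```

Under this scheme, the multimedia version would hold video segments in the stimulus field, the page-turner would hold script text, and the paper-and-pencil version would realize the branches as sealed page sets.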

Measures

Following completion of the conflict assessment, participants completed a survey assessing their perceptions and attitudes. Participants responded to each survey item using a 10-point Likert scale ranging from 1 (strongly disagree) to 10 (strongly agree). Many of the items were adapted from Smither et al. (1993) and Macan et al. (1994).

Perceptions: Face validity and fairness perceptions. To assess perceptions, we used two overall measures: face validity (composed of content validity, predictive validity, and the extent to which the assessment provided relevant information about the job) and perceived fairness (composed of procedural and distributive justice). Content and predictive validity were each assessed using five items (e.g., "The actual content of the exercise was clearly related to the job" and "I am confident that the exercise predicts how well people manage conflict on the job," respectively), and the amount of relevant information was assessed using four items (e.g., "The exercise gave information relevant to the content of a training program in conflict resolution skills"). Lastly, procedural justice was assessed using four items (e.g., "Overall, I believe that the exercise was fair"), and distributive justice was assessed using three items (e.g., "I think I will deserve the results that I receive on the exercise").

Attitudes: Enjoyment, shortness, satisfaction, and modernization. Five items were used to assess the extent to which the assessment was interesting and enjoyable (e.g., "This exercise was interesting"). Four items were used to determine how short the test was perceived to be (e.g., "This exercise was short"). Satisfaction with the assessment process was assessed using four items (e.g., "So far, participation in the assessment process has been a positive experience"). Lastly, modernization was measured using two items (e.g., "Modern companies are using this type of exercise").

Conflict assessment. For all three versions of the conflict assessment, scoring was based on an integration of a model-based scoring procedure and an empirical-based scoring procedure. For a detailed description of this hybrid keying procedure, see Olson-Buchanan et al. (1998). Higher scores on the assessment indicate more effective conflict resolution skills.
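For readers who want to reproduce this style of scale construction, the following sketch computes a composite scale score and coefficient (Cronbach's) alpha from an item-response matrix. The data and function are illustrative assumptions; the article supplies no code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, n_items) response matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical data: 131 respondents answering a 4-item scale on the
# study's 10-point Likert scale (1 = strongly disagree, 10 = strongly agree).
rng = np.random.default_rng(0)
items = rng.integers(1, 11, size=(131, 4)).astype(float)
composite = items.sum(axis=1)   # scale score of the kind summarized in Table 1
print(round(cronbach_alpha(items), 2))
```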

Results

Overview

We hypothesized that examinees completing the multimedia assessment would have more positive perceptions (Hypotheses 1 and 2) and attitudes (Hypotheses 3 and 4) than would examinees completing either the paper-and-pencil or the computerized page-turner assessment.

Descriptive Statistics

Table 1 presents the means, standard deviations, and scale reliabilities of the study variables. Many of the correlations among the variables were moderate and significant. It is interesting to note that none of the reaction measures were significantly related to the examinees' performance on the conflict assessment. This finding is consistent with two previous studies (Arvey et al., 1990; Macan et al., 1994), yet is inconsistent with two others (Bauer et al., 1998; Smither et al., 1993). For example, Macan et al. (1994) found that actual test scores were minimally or nonsignificantly related to examinees' perceptions of the assessment procedure, whereas Smither et al. (1993) found that performance scores were positively related to perceptions.

Table 1
Means, Standard Deviations, and Scale Reliabilities of Study Variables

Variable                          M      SD      α    No. items
Perception
  1. Content validity           27.50   4.36   .54        5
  2. Predictive validity        20.69   4.52   .63        5
  3. Relevant information       21.87   2.97   .66        4
  4. Procedural justice         22.86   2.90   .66        4
  5. Distributive justice       14.21   3.15   .77        3
Attitude
  6. Enjoyment                  27.11   4.52   .86        5
  7. Shortness                  19.50   3.64   .76        4
  8. Satisfaction with process  22.16   3.45   .84        4
  9. Modernization              10.28   1.68   .43        2
Performance
  10. Conflict Score             2.82   3.03   n/a      n/a

Note. n/a indicates that this statistic is not applicable. N = 131.

Medium of Administration

Multivariate analysis of variance was used to examine the hypotheses, and examinees' perceptions and attitudes served as the dependent variables. To examine the predicted differences between medium of administration groups, we created two contrast
variables, one comparing the multimedia assessment with the other two assessments, and another comparing the paper-and-pencil assessment with the two computerized assessments. Effect sizes (d) were computed to assess the magnitude of significant differences. As Table 2 shows, the overall multivariate test statistic comparing the multimedia assessment with the other two assessments on examinees' perceptions was significant, multivariate F(2, 127) = 2.74, p < .05. Examinees completing the multimedia assessment viewed the exercise as more face valid; it was perceived as more content valid (M = 29.17) and more predictively valid (M = 21.66), compared with the pooled average of the other two administration media (M = 26.92, d = 0.52, p < .01, and M = 20.14, d = 0.34, p < .05, respectively). In addition, the multimedia assessment was viewed as providing more relevant information about the job (M = 22.46) than the other assessments did (M = 21.58, d = 0.29, p < .05). These findings support Hypothesis 1; the multimedia assessment was perceived as more face valid. It is important to note that the contrast between the paper-and-pencil assessment and the two computerized assessments did not result in a significant difference in examinee perceptions; computerization alone did not result in enhanced perceptions. Minimal support was found for Hypothesis 2, which stated that the multimedia assessment would be perceived as more fair; however, the trends were in the hypothesized direction. The multimedia assessment was perceived as slightly more procedurally just (M = 23.46) than the other two administration media were (M = 22.62). As Table 3 shows, the overall test comparing the multimedia assessment with the other two assessments on examinees' attitudes was significant, multivariate F(2, 127) = 3.22, p < .05. The multimedia assessment was viewed as more enjoyable (M = 28.56) and shorter (M = 20.71) than the other two assessments (M = 26.69, d = 0.44, p < .05, and M = 18.49, d = 0.60, p < .01, respectively). Examinees completing the multimedia assessment were also more satisfied with the assessment process (M = 23.29) than were examinees completing the paper-and-pencil and computerized page-turner assessments (M = 21.69, d = 0.48, p < .05). These results provide support for Hypothesis 3, that the multimedia assessment induces more positive attitudes. However, contrary to our prediction, there were no significant differences between the computerized page-turner and the paper-and-pencil assessment. Again, it appears that computerization per se did not result in more positive attitudes than the paper-and-pencil assessment did.

Support was also found for Hypothesis 4; the multimedia assessment was viewed as more modern (M = 10.48) than the other two assessments were (M = 10.00, d = 0.29, p < .05). It is not surprising that there was no significant difference between the two computerized assessments and the paper-and-pencil assessment; unlike traditional paper-and-pencil tests, the paper-and-pencil assessment used in this study was adaptive and used colored tabs for branching.
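As an illustration of the analytic approach described above, the sketch below codes the two planned contrasts and computes a pooled-standard-deviation effect size of the kind reported for these comparisons. The contrast coefficients and names are illustrative assumptions; the article does not report its exact coding.

```python
import numpy as np

# Illustrative orthogonal contrast coefficients (not taken from the article).
# Contrast 1: multimedia vs. the pooled paper and page-turner groups.
# Contrast 2: paper-and-pencil vs. the pooled computerized groups.
CONTRASTS = {
    "multimedia_vs_rest": {"multimedia": 2, "paper": -1, "page_turner": -1},
    "paper_vs_computerized": {"paper": 2, "multimedia": -1, "page_turner": -1},
}

def cohens_d(focal: np.ndarray, pooled: np.ndarray) -> float:
    """Standardized mean difference between a focal group and the pooled
    remaining groups, using the pooled standard deviation."""
    n1, n2 = len(focal), len(pooled)
    s = np.sqrt(((n1 - 1) * focal.var(ddof=1) +
                 (n2 - 1) * pooled.var(ddof=1)) / (n1 + n2 - 2))
    return float((focal.mean() - pooled.mean()) / s)
```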

Discussion

Despite the proliferation of computers in organizations today, published research on the use of computers for selection and training has been scant. Indeed, some organizations have already invested a great deal of time and money in developing high-tech computerized assessments without any empirical evidence that examinees prefer these types of assessments. This study was designed to fill this research gap by exploring the impact of the type of technology used to administer an assessment on examinees' perceptions and attitudes.

This study demonstrates that medium of test administration does matter. The multimedia version of the assessment yielded more positive reactions than the computerized page-turner and the paper-and-pencil versions did, even though the linguistic content of the three assessments was identical. Managers completing the multimedia assessment perceived the assessment as more content and predictively valid and felt that it provided more relevant information about the job. In addition, managers who completed the multimedia assessment found it to be both more enjoyable and shorter and were more satisfied with the assessment process.

Computerization per se was not enough to make a difference. The computerized page-turner did not result in enhanced attitudes or perceptions over those of the paper-and-pencil assessment. Evidently, it was the multimedia features of the computer presentation that resulted in the most positive affective reactions. Perhaps simple computerization is too mundane to be noticed by today's computer-savvy workforce.

Implications for Research and Practice

A logical question is, Do examinees' reactions to an assessment really matter? For example, do these reactions have any implications for the use of assessments in recruitment and selection? Previous research has demonstrated a relation between examinee reactions to an assessment and such variables as the motivation to perform well on the assessment (Arvey et al., 1990), attraction to

Table 2
Multivariate and Univariate F Statistics for Medium of Administration on Perceptions

Independent variable             Error df  Multivariate F  Content validity  Predictive validity  Relevant information  Procedural justice  Distributive justice
Video vs. paper and page-turner    123         2.74*            9.05**             3.66*                4.86*                2.14                 0.01
Paper vs. video and page-turner    123         0.75             1.30               0.27                 2.95                 0.03                 0.04
R²                                                               .07                .03                  .04                  .02                  .00

Note. The statistics reported for the dependent variables are univariate F tests. *p < .05. **p < .01.