Using the Diary Method to Deal With Social Loafers on the Group Project: Its Effects on Peer Evaluations, Group Behavior, and Attitudes

Curt J. Dommeyer

This article reports on the use of group and individual diaries to control social loafing on the group project. Although both forms of the diary were designed to prevent social loafing, neither appeared to do so. An unexpected result of the individual diaries is that they appeared to make the majority of the class, namely the “nonloafers,” more aware of and less tolerant of their loafing partners. Consequently, firings of group members were most likely to occur when individual diaries were used. This article shows how the diary method affects peer evaluations and group behavior, investigates the convergent validity of the individual diary, and summarizes students’ attitudes toward the diary method.

Keywords: loafer; free rider; diary; peer evaluation

INTRODUCTION

Business students are often required to conduct group projects. The group environment allows students to work on a complex, challenging project (D. L. Williams, Beard, & Rymer, 1991; Dommeyer, 1986; Goretsky, 1984; Henke, 1985); practice communication, organizational, and technical skills (D. L. Williams et al., 1991); and learn from their peers (D. L. Williams et al., 1991). Unfortunately, a group's effectiveness can be seriously curtailed if it contains one or more members who do not contribute fairly to the group's goals. Group members who shirk their obligations in the hopes of benefiting from the work of others are often referred to as social loafers or free riders (D. L. Williams et al., 1991; Mello, 1993; Strong & Anderson, 1990). This article will review the social loafing literature and report on a method that is designed to prevent social loafing on the group project, namely, the diary method.

Social loafing appears to be common in group projects. Surveys of students reveal that students' biggest complaint regarding group projects is having to deal with social loafers, and most students indicate that they have had experience with social loafers (Abernethy & Lett, 2005; McCorkle et al., 1999; McLaughlin & Fennick, 1987). In their survey of 70 students, Abernethy and Lett (2005) found that the second most significant predictor of students' satisfaction with group work was the absence of a free-rider problem.

A variety of explanations have been offered for social loafing. An obvious explanation is that the student is lazy and attempting to avoid work (D. L. Williams et al., 1991). However, the real causes of social loafing may be more subtle or complex. Some have conjectured that social loafing is due to low self-esteem (Shepperd, 1993), lack of an appropriate incentive (Albanese & Van Fleet, 1985; Shepperd & Wright, 1989), high expectations of one's partners' performances (K. Williams & Karau, 1991), or dominant partners who will not allow others to contribute (Beatty, Haas, & Sciglimpaglia, 1996; D. L. Williams et al., 1991).

Another explanation relates to the fact that the social loafing of one partner can lead to the social loafing of others. Some potential "nonslackers," for example, may not want to waste their time working on a project that is doomed to fail because of the lack of effort from one or more slackers (Grieb & Pharr, 2001). For a group with slackers to succeed, one or more persons in the group would have to do more than their fair share of the work, and no one wants to be the "sucker" who does all the work (Kerr, 1983; Robbins, 1995). Other slackers apparently take an economic perspective, rationalizing that the benefits-to-costs ratio of loafing exceeds that of working (Albanese & Van Fleet, 1985). There is no doubt that social loafing in some cases is caused by unique characteristics of a student that make it difficult for him or her to make a complete contribution to the group project, for example, language barriers, cultural differences, learning disabilities, physical or mental problems, personality traits, or time constraints.

Curt J. Dommeyer, PhD (University of Cincinnati), is a professor of marketing at California State University, Northridge. His research interests include pedagogy, survey methodology, consumer privacy, and business etiquette. This research was supported by a research grant from the Center for Excellence in Learning and Teaching at California State University, Northridge. The author thanks his wife, Susan, for her assistance in data coding, data entry, and word processing. Journal of Marketing Education, Vol. 29 No. 2, August 2007 175-188 DOI: 10.1177/0273475307302019 © 2007 Sage Publications


Some researchers believe that social loafing is more likely to occur when the group size is large (Kerr, 1983; Strong & Anderson, 1990). Support for this relationship comes from North, Linley, and Hargreaves (2000), who compared the creativity of two group sizes—groups of three students and groups of eight students—and found that productivity per individual was, on average, lower for the larger group size. Similarly, Latane, Williams, and Harkins (1979) found that as group size increased, the amount of individual effort on a rope-pulling task declined on average. A study by Ingham, Levinger, and Peckham (1974), however, revealed that the relationship between group size and social loafing is curvilinear, namely, social loafing increases as group size goes from one to three but then declines with larger group sizes.

To prevent social loafing on the group project, social psychologists believe that the following features must be in effect: (a) Group goals must be challenging and motivating, (b) individual accountability must be achieved through a monitoring procedure that can detect each group member's contributions or lack of contributions, and (c) a reward system must be used that properly reflects individual contributions (Myers, 1990). Although instructors should have little difficulty finding tasks that are motivating and challenging for their groups, they may find it difficult to determine individual contributions on a group project because they have little firsthand knowledge of group members' activities (Brooks & Ammons, 2003; Chen & Lou, 2004; Haas, Haas, & Wotruba, 1998). Consequently, it is necessary for instructors to monitor group member contributions through one or more methods. D. L. Williams et al. (1991) recommended a variety of monitoring techniques, namely, peer evaluations, instructor observations, working papers, meeting reports, interaction logs, and confidential memos to the instructor. Of these techniques, peer evaluations are undoubtedly the most popular method for assessing group member contributions.

Another way to monitor group member behavior is through the use of either a group or an individual diary. When the diary method is used, groups or group members are asked to record their assignments, accomplishments, and problems on a regular basis. The diary method has several advantages over closed response peer evaluations. First, because the diary is open ended, students are free to comment on any aspect of the group or project. They are not constrained by the evaluative criteria that are set forth in structured peer evaluations. Second, structured peer evaluations can be completed without much thought, thereby leading to invalid evaluations. With a diary, a student must make a modicum of mental effort to write an evaluation. Although there is no guarantee that what the student writes is true, anecdotal information indicates that students provide more honest evaluations of themselves and their peers in a diary than through closed response peer evaluations (D. L. Williams et al., 1991).

Third, when one monitors one's own behavior with a diary, this self-monitoring process can have reactive effects. That is, the act of recording one's own behavior in a diary can make a person more aware of his or her own behavior. This heightened awareness can lead to a change in behavior (Snider, 1987). Moreover, the self-monitoring of one's behavior may lead one to covertly control his or her behavior through self-reinforcement or self-punishment (Kanfer, 1970). Regardless of the psychological mechanisms involved, it is clear that self-monitoring can affect behavior. It has been used to promote better behavior of adolescents in the classroom (Broden, Hall, & Mitts, 1971; Freeman & Dexter-Mazza, 2004; Todd, Horner, & Sugai, 1999), improve the academic performance of students (DiGangi, Maag, & Rutherford, 1991), help college women maintain an exercise program (Forsyth, 1998), and develop better study habits among students (Richards, McReynolds, Holt, & Sexton, 1976).

It is assumed that if groups or group members maintain a diary of their group's activities, this process will have reactive effects that may decrease social loafing. The diary maintenance procedure should make group members more aware that they are being watched, namely, that they are being watched by themselves, their group members, and the instructor. With all this monitoring, any potential slacker should realize that it will be difficult to hide a negative project performance and consequently should be more motivated to perform satisfactorily on the group project.

The use of diaries to monitor group member performance has not received much attention in the academic literature. Although both McCorkle et al. (1999) and D. L. Williams et al. (1991) recommended that groups should maintain a record or file of their activities, they did not indicate the effectiveness of these techniques or reveal how students felt about them. Batra, Walvoord, and Krishnan (1997) studied nine group pedagogical tools over a four-semester period, one of which was a personal journal requirement for each group member. They received such unfavorable student feedback about the usefulness of the journal that they dropped the journal requirement during the fourth semester and concluded that the personal journal was not worth pursuing as a pedagogical tool. In a similar fashion, Belanger and Greer (1992) found that their students had no enthusiasm for maintaining a personal journal. Dommeyer and Lammers (2006), in contrast, found that requiring their student groups to complete a group diary over the semester drew a mixed review. Overall, their students felt that neither they nor the instructor benefited much from the diary assignment. Yet, of those who gave their general reaction to the diary assignment (n = 45), 76% felt it was a good idea. Moreover, 25% of the students reported that the diary increased their contribution to the project, and 16% felt that the diary increased the contributions from some group members.

This article will report on the use of group and individual diaries to control social loafing on a marketing research project.


The effect of the diary method on peer evaluations and group behavior will be reported. In addition, an attempt will be made to establish the convergent validity for the diary method by comparing diary evaluations to structured peer evaluations. Finally, students' attitudes toward the diary method will be revealed.


METHOD

This project was conducted in the Los Angeles area at a California state university that serves approximately 33,000 students. Data were collected during three consecutive fall semesters, starting in 2003 and ending in 2005. During each fall semester, the present author taught three classes of marketing research, with each class serving 20 to 25 junior and senior undergraduates. Each of the classes in this study, regardless of the year involved, was subjected to the same marketing research project in which groups of two or three students were required to conduct a literature review of a survey topic, develop a questionnaire, collect survey data, analyze the data with computer software, and present the survey results in both a written and an oral report. The project grade accounted for one third of each student's course grade.

Students were told that they could select the members of their group. Any students who had not found a group by the beginning of the second week of class were formed into groups of two or three students. During the first week of class, students were told to exchange contact information (i.e., telephone numbers, e-mail addresses, and home addresses) and were instructed on proper group member behavior. That is, they were told that they should respond promptly, courteously, and professionally to requests for information from their partners. Moreover, they were told that each group member should make a roughly equal contribution to the project. Students were told, however, that they did not necessarily have to divide each task equally among the group members. Rather, they could take advantage of the strengths of individual members by having each task done by the member or members who were best at doing it. But overall, the contribution among the group members should be roughly equal.

At the beginning of each of the three fall semesters, all students were cautioned about social loafing, and the signs of a social loafer were discussed (e.g., poor work performance, weak class attendance, missed meetings, and failure to communicate). If a group found that it was dissatisfied with the performance of a member, the group was instructed to e-mail a warning letter to the person of concern, with a copy of the message going to the instructor. If the warned member did not correct his or her poor performance, the group was permitted to fire the weak performer. Any student fired from a group was required to complete the project by himself or herself.

All students, regardless of the semester, were told that lengthy peer evaluations would be conducted at the end of the semester. They were informed that peer evaluations would be used to evaluate contributions to the project: Students who demonstrated a roughly equal contribution to the project would be awarded the group grade; students who received a rating that reflected less than an equal contribution would be given an individual project grade that was lower than the group grade. No student would be awarded a grade that exceeded the group grade.
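As a concrete illustration of the grading rule just described, the sketch below maps a peer-rated contribution level to an individual project grade. The cutoff value and the proportional penalty are hypothetical illustrations, not values taken from the article.

```python
def individual_project_grade(group_grade: float, mean_peer_rating: float,
                             equal_share_cutoff: float = 7.0) -> float:
    """Illustrative grading rule (hypothetical cutoff and penalty, not the article's):
    students rated at or above the cutoff receive the group grade; lower-rated students
    receive a proportionally reduced grade, and no one exceeds the group grade."""
    if mean_peer_rating >= equal_share_cutoff:
        return group_grade
    # Scale the grade down in proportion to how far the rating falls below the cutoff.
    return round(group_grade * (mean_peer_rating / equal_share_cutoff), 1)

print(individual_project_grade(90.0, 8.4))  # rated as an equal contributor: group grade
print(individual_project_grade(90.0, 4.2))  # rated below an equal share: reduced grade
```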

Control and Treatment Groups

Students attending the marketing research classes during the fall semester of 2003 served as the control group. They received the conditions described in the previous section and were not required to maintain any form of a diary of their activities.

Students who attended the marketing research classes during the fall semester of 2004 were subjected to all of the conditions of the control group. In addition, each group was required to maintain a team activity diary (TAD). Groups were told that the TAD should indicate dates that the group had meetings or discussed project activities, who attended the meetings or discussions, the task assignments that were made, and any problems that the group was experiencing. Students were given a one-page sample of a TAD in the syllabus, and they were told that their cumulative TAD would be collected at the end of the semester and would be graded on its completeness and descriptiveness. The TAD was worth 5 points, which accounted for about 4% of the project grade. On a weekly basis throughout the semester, groups were reminded to maintain their TAD and to make a backup copy of it in case computer problems occurred. The manner in which the TAD was maintained was left up to the groups: They could assign a "secretary" to maintain the TAD, or they could rotate the diary maintenance task among the members of the group.

Feedback from those exposed to the TAD resulted in the development and subsequent implementation of the individual team activity diary (ITAD) in the fall semester of 2005. To maximize the chances that the ITAD would be successful, the instructor implemented the ITAD in a more dramatic fashion than the TAD. That is, the ITAD was worth 8% of the project grade (as compared to 4% for the TAD), it had to be completed by every student (as opposed to only one group member for the TAD), it was designed to focus on partner problems (as opposed to group and project problems for the TAD), it was collected every 2 weeks during the course of the project and at the end of the semester (as opposed to only once at the end of the semester for the TAD), it was maintained confidentially by the instructor (whereas confidential procedures were not in effect with the TAD), and it was implemented in a more punitive and proactive fashion than the TAD.

Students required to complete the ITAD were warned that any individual's poor performance on completing the ITAD would definitely lower the individual's grade on the project and could lower the group's project grade. After each batch of ITADs was collected, the instructor immediately issued a typed warning notice to any student who did not turn in an ITAD or who turned in an ITAD that either used the wrong format or was incomplete.

The notice summarized the student's deficiency, indicated that the deficiency would definitely lower the student's project grade and could lower the group's project grade, and stated how to correct the deficiency. If any of the ITADs revealed at any time that a team member was loafing, the loafing student would be sent an e-mail from the professor that summarized the problem, indicated how to correct it, and mentioned that loafing behavior of a group member could result in that member being fired. At the same time that an e-mail was sent to the slacking student, another e-mail was sent to the "nonslacking" members of the group. This e-mail stated that the professor was aware of the problem student, that the group should continue to monitor the performance of the weak performer, and that corrective action (e.g., firing) should take place if the weak performer failed to correct the poor behavior. Students who were required to complete the ITAD were also subjected to all of the conditions of the control group.
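The biweekly collection-and-warning routine described above is essentially a triage rule. The sketch below illustrates that rule under entirely hypothetical data structures and field names; it is not the instructor's actual procedure or software.

```python
from dataclasses import dataclass, field

@dataclass
class ITADSubmission:
    """Hypothetical record of one student's biweekly ITAD submission."""
    student: str
    submitted: bool
    correct_format: bool
    complete: bool
    loafing_complaints: list[str] = field(default_factory=list)  # teammates named as loafing

def triage(batch: list[ITADSubmission]) -> dict[str, list[str]]:
    """Sort a biweekly batch of ITADs into deficiency warnings and loafing alerts."""
    notices: dict[str, list[str]] = {"deficiency_warnings": [], "loafing_alerts": []}
    for itad in batch:
        if not (itad.submitted and itad.correct_format and itad.complete):
            # Deficiency notice: lowers the individual grade and may lower the group grade.
            notices["deficiency_warnings"].append(itad.student)
        for loafer in itad.loafing_complaints:
            # Loafing alert: e-mail the loafer and, separately, the nonloafing members.
            notices["loafing_alerts"].append(loafer)
    return notices

batch = [ITADSubmission("A. Lee", True, True, True, ["B. Cruz"]),
         ITADSubmission("B. Cruz", False, False, False)]
print(triage(batch))
```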

Collecting Peer Evaluations

During the final week of each fall semester in this study, students were asked to complete a lengthy peer evaluation form. They were asked to use a 10-point scale (where 1 = not very descriptive and 10 = very descriptive) to rate their one or two partners on 45 descriptive phrases. Some of the phrases described good partner behavior or attributes (e.g., "was enthusiastic about working with me" or "was a great person to work with"), whereas others revealed just the opposite (e.g., "would not respond to my e-mails" or "was reluctant to work on the project"). The 45 descriptive phrases are displayed in Table 1. They were created by the author to reveal all conceivable aspects of a student's impressions of his or her partner's contributions to the project.

Measuring Group Behavior

A measure of group behavior was obtained by noting the number of group member firings that occurred during each semester. Although it is possible that a firing could occur for reasons not related to social loafing (e.g., personality conflicts or a desire to work alone), it appeared to this instructor that all firings that occurred during the course of this study were due to social loafing.

Establishing Convergent Validity for the ITAD

An attempt will be made to establish convergent validity for the ITAD by examining the correlation between peer evaluations and diary evaluations. The cumulative ITAD has an area that focuses on "teammate problems." In those cases where partner problems were listed, the present author noted the following: (a) the number of times a student complained about the first and, if applicable, the second partner and (b) the highest severity level of any mentioned complaints about the first and, if applicable, the second partner. Complaint severity was classified as mild, medium, or high. A mild complaint included comments such as "he or she was late for the meeting," "he or she was slow to respond to my e-mails," or "he or she was rude to me." Medium complaints included comments such as "he or she procrastinates a lot" or "he or she did not do his or her required work properly." High complaints included comments such as "he or she did not show up for a meeting," "he or she does not respond to telephone calls or e-mails," or "he or she did not do the required work."

In summary, each student provided an evaluation of his or her one or two partners via both the ITAD and the peer evaluations. To assess the convergent validity of the ITAD, two sets of correlations will be examined for each student's rating of his or her one or two partners: the correlation between the number of complaints on the ITAD about a particular group member and the peer evaluations for that group member, and the correlation between the highest severity level of the complaints on the ITAD about a particular group member and the peer evaluations for that member.
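The two correlation sets just described could be computed along the following lines. This is a minimal sketch with made-up data and hypothetical column names; as reported later in Table 6, the article pairs Pearson correlations with the complaint counts and Spearman rank correlations with the ordinal severity levels.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical tidy data: one row per rater-teammate pair.
df = pd.DataFrame({
    "n_complaints": [0, 2, 5, 1, 3],           # count of ITAD complaints about the teammate
    "max_severity": [0, 1, 3, 1, 2],           # 0 = none, 1 = mild, 2 = medium, 3 = high
    "peer_score":   [1.2, 3.5, 8.0, 2.1, 6.4]  # a peer evaluation score for the same teammate
})

# Number of complaints vs. peer evaluation: Pearson correlation.
r, p = pearsonr(df["n_complaints"], df["peer_score"])
# Highest severity level (ordinal) vs. peer evaluation: Spearman rank correlation.
rho, p_rank = spearmanr(df["max_severity"], df["peer_score"])
print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f} (p = {p_rank:.3f})")
```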

Measuring Students' Attitudes Toward the TAD and ITAD

Students who completed either the TAD or ITAD were asked at the end of the semester to complete a questionnaire that inquired about their attitudes toward the diary method. They were asked whether the diary was a good or bad idea, what they liked and disliked about the diary method, whether their maintenance of the diary had any effect on either their or their partners' contributions to the project, and how the diary assignment could be improved. In addition, they were asked to use a 5-point Likert scale to indicate their degree of agreement or disagreement with 27 belief statements concerning the diary method (see Table 3).

PROPOSITIONS

The main purpose of both the TAD and ITAD was to alert students to the fact that their contribution to the group project was being watched carefully and that there could be negative consequences (e.g., being fired or receiving a lowered grade on the project) for any group member who did not contribute fairly to the project. Moreover, as stated earlier, the maintenance of a diary can have reactive effects, causing positive behavioral changes in the person who maintains the diary. For these reasons, it was assumed that both the TAD and ITAD would encourage positive partner behavior and inhibit social loafing. Consequently, the following propositions are formulated:

Proposition 1: Students exposed to either the TAD or ITAD will be more likely to give positive peer evaluations and less likely to give negative peer evaluations than those in the control group.

Proposition 2: Groups exposed to either the TAD or ITAD will be less likely to fire a group member than groups in the control group.


Because of the nature of the ITAD and its dramatic implementation, it was felt the ITAD would have a stronger impact on students' diary attitudes, peer evaluations, and behavior than the TAD. Consequently, the following propositions are offered:

Proposition 3: Students exposed to the ITAD and the manner in which it was implemented will be more likely to indicate that their behavior is being monitored than those exposed to the TAD.

Proposition 4: Students exposed to the ITAD and the manner in which it was implemented will be more likely to give positive peer evaluations and less likely to give negative peer evaluations than those exposed to the TAD.

Proposition 5: Groups exposed to the ITAD and the manner in which it was implemented will be less likely to fire a group member than those exposed to the TAD.

Convergent validity for the ITAD can be established if it is shown that ITAD evaluations are correlated with peer evaluations. It is assumed that the peer evaluations and the ITADs are measuring the same dimensions. Therefore, the following propositions are offered:

Proposition 6: Peer evaluations that measure positive traits will be negatively correlated with the number and severity of complaints on the ITAD.

Proposition 7: Peer evaluations that measure negative traits will be positively correlated with the number and severity of complaints on the ITAD.

PRELIMINARY DATA ANALYSIS

Deriving Peer Scale Scores

A 45-item instrument was used to collect the peer evaluations. Each student gave either one or two sets of peer evaluations depending on the group's size, resulting in a total of 342 complete evaluations for the three semesters in this study. When factor analysis with varimax rotation was applied to these data, five factors emerged that are summarized in Table 1. Three of the factors represent negative traits—Incompetent (Factor 1), Unavailable (Factor 3), and Bossy (Factor 4)—whereas two of the factors represent positive traits—Supportive (Factor 2) and Super Worker (Factor 5).

The items comprising each of the five factors were used to create the five peer rating scales displayed in Table 2. Before a scale score was calculated, negatively loaded items were reverse-coded so that a high score on a negative item was equivalent to a low score on a positive item and vice versa. A total scale score was then derived by simply summing each scale's item scores. Each total scale score was then divided by the number of items comprising the scale to arrive at a mean scale score. Because each item was collected with a 10-point scale, the resulting mean scale score could range anywhere from 1 to 10. Consequently, a low mean scale score revealed a lack of a feature, whereas a high mean scale score meant that the person being evaluated possessed the feature.

Because a student was a member of either a two-person or three-person group, some of the scale scores had to be adjusted to take group size into consideration. If a student was in a two-person group, the average scale scores were determined as described in the previous paragraph. However, if a student was in a three-person group, he or she gave two sets of scale scores, namely, one for each of his or her two partners. To get a single set of scale scores for students in a three-person group, their final scale scores were determined by averaging the scale scores they gave to each of their two partners. Averaging the scale scores of the two partners retains all the information that was available to assess a person's attitude toward his or her two group members. If a person's attitude toward the group were determined by analyzing the person's attitude toward only the worst or best group member, the results would be biased toward an extreme group performer and would not reflect the person's attitude toward the entire group (see Note 1).

To assess the reliability of the five peer rating scales, Cronbach's alpha was calculated for each of the scales and is displayed at the bottom of Table 1. The first three scales have excellent reliability coefficients, ranging from .90 to .98. However, the latter two scales—Bossy and Super Worker—have reliability coefficients of .65 or less. Although the data for these scales will be analyzed, the reader should be forewarned that any conclusions based on these scales may be premature.
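The scale-score derivation and reliability check described above can be sketched as follows. The item names and groupings are hypothetical placeholders; the only assumptions are that the 45 peer items sit in a pandas DataFrame and that each scale's items (and its reverse-coded items) are known.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from item-level scores (rows = evaluations, columns = items).
    Reverse-code negatively loaded items before calling, as the article's table notes."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

def mean_scale_score(df: pd.DataFrame, items: list[str], reverse: list[str],
                     scale_max: int = 10) -> pd.Series:
    """Reverse-code the listed items, then average all items into a mean scale score."""
    block = df[items].copy()
    for col in reverse:
        block[col] = (scale_max + 1) - block[col]  # e.g., 10 -> 1 and 1 -> 10
    return block.mean(axis=1)

# Hypothetical usage for one scale with one reverse-coded item:
# df["incompetent"] = mean_scale_score(df, items=["item01", "item02", "item05"],
#                                      reverse=["item05"])
# print(cronbach_alpha(df[["item01", "item02", "item05"]]))
```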

Deriving Diary Rating Scale Scores

Those exposed to the TAD or ITAD were asked to use a 5-point Likert scale to indicate their degree of agreement or disagreement with 27 belief statements concerning the diary method. Some of these statements were used earlier in the diary study by Dommeyer and Lammers (2006), whereas others were developed specifically for this study. Factor analysis with varimax rotation was applied to these statements to determine the core dimensions of the data. The initial results revealed that one of the items was not correlated with any of the factors. Consequently, this item was deleted from the analysis, and the factor analysis was run again on the remaining 26 items.

From this analysis, six factors emerged (see Table 3). The first factor was called Positive Feelings as it had high positive loadings on three items that reflect a positive attitude toward the diary method and high negative loadings on four items that show a general negative attitude toward the diary method. The second factor was called Revealing as it has high loadings on seven items that indicate that the diary would be effective at revealing each group member's contribution to the project. An examination of the loadings on the remaining factors suggested that Factors 3 through 6 should be labeled Helpful, Watched, Uneasy, and Hassle, respectively.
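The varimax-rotated factor analyses used for the peer and diary items could be reproduced along these lines. This is only a sketch, not the author's actual analysis script, and it assumes the third-party factor_analyzer package is available and that the item responses sit in a DataFrame.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed dependency: pip install factor-analyzer

def varimax_factors(responses: pd.DataFrame, n_factors: int) -> pd.DataFrame:
    """Fit a factor analysis with varimax rotation and return one loading per item."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(responses)
    loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                            columns=[f"Factor {i + 1}" for i in range(n_factors)])
    # Keep only each item's largest absolute loading, mirroring the article's tables.
    keep = loadings.abs().eq(loadings.abs().max(axis=1), axis=0)
    return loadings.where(keep)

# Hypothetical usage: six factors for the 26 diary items (Table 3),
# or five factors for the 45 peer items (Table 1).
# print(varimax_factors(diary_items, n_factors=6).round(2))
```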


TABLE 1
FACTOR LOADINGS ON PEER RATING ITEMS AFTER VARIMAX ROTATION (N = 342)

NOTE: Only the largest factor loading for each item is displayed. Items marked * were reverse-coded when calculating Cronbach's alpha and the total score on the scale derived from that factor.

Factor 1: Incompetent (eigenvalue after varimax rotation = 12.20; variance explained = 27.12%; Cronbach's alpha = .98)
 1. Deserves a lower grade on the team project than I do. (.825)
 2. Contributed less to the project than I did. (.812)
 3. Always had to be told what to do. (.769)
 4. Did not do much work on the project. (.754)
 5. Deserves the same grade on the team project as I do. (–.737)*
 6. Did poor quality work on the project. (.721)
 7. Was a procrastinator, i.e., continually delayed doing project activities. (.717)
 8. Had to be reminded to do work on the project. (.697)
 9. Made me feel frustrated. (.686)
10. Is a person I would enjoy working with again on a team project. (–.686)*
11. Left the most difficult parts of the project for me to do. (.679)
12. Is a person I'd prefer not to work with again on a future team project. (.678)
13. Was an "equal partner" on the team project. (–.654)*
14. Completed assigned responsibilities effectively. (–.647)*
15. Was not dependable. (.646)
16. Was a great person to work with. (–.633)*
17. Had lots of excuses for why s/he could not work on the project. (.608)
18. Completed assigned responsibilities on time. (–.557)*
19. Was reluctant to work on the project. (.451)
20. Complained a lot about the project. (.433)

Factor 2: Supportive (eigenvalue = 8.96; variance explained = 19.91%; Cronbach's alpha = .95)
21. Had good ideas. (.698)
22. Was fun to work with. (.676)
23. Encouraged me to give my best effort on the team project. (.650)
24. Was easy to communicate with. (.647)
25. Seemed intelligent. (.646)
26. Worked effectively toward the goals of the team. (.628)
27. Readily accepted feedback from me. (.625)
28. Showed an interest in my accomplishments on the project. (.620)
29. Took pride in the work we did. (.616)
30. Helped me to understand how to do parts of the project. (.581)
31. Was sensitive to my feelings. (.576)
32. Was enthusiastic about working with me. (.555)
33. Was available when needed. (.539)

Factor 3: Unavailable (eigenvalue = 4.79; variance explained = 10.63%; Cronbach's alpha = .90)
34. Would not return my telephone calls. (.699)
35. Was not available for meetings. (.691)
36. Would not respond to my e-mails. (.689)
37. Always attended meetings. (–.588)*
38. Was never around when I needed his/her help. (.558)

Factor 4: Bossy (eigenvalue = 2.55; variance explained = 5.67%; Cronbach's alpha = .65)
39. Acted like a "boss" who continually demanded things from me. (.726)
40. Made me feel guilty about not doing enough work on the project. (.698)
41. Would not allow me to contribute a fair share to the project. (.613)
42. Made me feel uncomfortable during our meetings. (.581)

Factor 5: Super Worker (eigenvalue = 1.96; variance explained = 4.35%; Cronbach's alpha = .54)
43. Deserves a higher grade on the team project than I do. (.827)
44. Did more than a fair share on the project. (.627)
45. Had project skills that I did not have. (.531)

TABLE 2
COMPARING TREATMENT GROUPS ON MEAN SCORES ON PEER RATING SCALES

Peer Rating Scale   Control (n ≥ 60)   Team Activity Diary (n ≥ 68)   Individual Team Activity Diary (n = 70)   Test Statistic
Incompetent         2.54a              2.98a                          4.06b                                     F(2, 197) = 8.46†††
Supportive          8.59a              8.28a                          6.81b                                     F(2, 196) = 16.1†††
Unavailable         1.71a              2.14a                          3.07b                                     F(2, 197) = 9.19†††
Bossy               1.40a              1.66a,b                        2.00b                                     F(2, 196) = 4.67†††
Super Worker        3.65               4.29                           3.59                                      F(2, 196) = 2.55

NOTE: Means with different subscripts are significantly different from each other at the .05 alpha level according to the Scheffé test.
†††p < .01, one-tailed.
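The F statistics in Table 2 correspond to one-way analyses of variance across the three conditions. The sketch below shows how such a test could be run with SciPy on hypothetical data; the Scheffé post hoc comparisons noted in the table are not reproduced here.

```python
from scipy.stats import f_oneway

# Hypothetical mean scale scores on one peer rating scale, one value per student rater.
control = [2.1, 3.0, 2.4, 1.8, 2.9]
tad     = [2.8, 3.3, 2.5, 3.1, 2.6]
itad    = [4.2, 3.9, 4.5, 3.6, 4.1]

f_stat, p_value = f_oneway(control, tad, itad)
df_within = len(control) + len(tad) + len(itad) - 3  # three groups
print(f"F(2, {df_within}) = {f_stat:.2f}, p = {p_value:.4f}")
```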

Diary rating scale scores for the six factors were derived by using the same procedure that was used in the preceding section to derive peer rating scale scores. The only difference between the two procedures was that the peer rating scale scores are based on a 10-point scale and the diary rating scale scores are based on a 5-point scale. Mean scores on the diary rating scales are displayed in Table 4. Higher scores indicate more agreement and vice versa.

Cronbach's alpha was calculated for each of the diary rating scales. Displayed at the bottom of Table 3, the reliability coefficients reveal that the first three scales—Positive Feelings, Revealing, and Helpful—have moderate to good reliability, whereas the trailing three scales—Watched, Uneasy, and Hassle—have weak reliability. Although all the diary rating scale scores will be analyzed, the reader should note that any conclusions based on the latter three scales may be premature.

RESULTS

Analysis of Propositions

Scores on the peer rating scales reveal that the ITAD had a dramatic effect on the peer evaluation scores, although the mean scores were not in the predicted direction. Proposition 1 and Proposition 4 collectively predicted that the best peer evaluation scores (i.e., high scores on positive factors and low scores on negative factors) would be produced by the ITAD, the second best peer evaluation scores would be produced by the TAD, and the worst scores would be produced by the control group.


TABLE 3
FACTOR LOADINGS ON DIARY RATING ITEMS AFTER VARIMAX ROTATION (N = 133)

NOTE: Analysis is based on data collected from those exposed to the diary method, whether the team activity diary or the individual team activity diary. Only the largest factor loading for each item is displayed. Items marked * were reverse-coded when calculating Cronbach's alpha and the total score on the scale derived from that factor.

Factor 1: Positive Feelings (eigenvalue after varimax rotation = 4.35; variance explained = 16.72%; Cronbach's alpha = .87)
- Filling out the team activity diary was a complete waste of time. (–.752)*
- It took too much time to maintain the diary of the team's activities. (–.744)*
- I wish all my instructors required project teams to keep a team activity diary. (.682)
- I enjoyed recording the team's activities in the team activity diary. (.679)
- Future teams in this marketing research class should not be required to maintain a diary of the team's activities. (–.629)*
- I looked forward to having our team's activities recorded in the team activity diary. (.619)
- The team activity diary had no positive value to our team. (–.595)*

Factor 2: Revealing (eigenvalue = 3.82; variance explained = 14.70%; Cronbach's alpha = .86)
- The team activity diary should help my instructor determine the grade that each member of the team should receive. (.746)
- I always felt that the team activity diary would reveal that I was a solid contributor to the team project. (.736)
- The team activity diary should accurately reveal to my instructor the contribution that each team member made to the project. (.720)
- The team activity diary will help document the contribution that each team member made on the project. (.684)
- The team activity diary should help our instructor understand how the tasks of our project were divided among the team members. (.664)
- The team activity diary motivated me to make a fair contribution to the team project. (.543)
- Because of the team activity diary, I was very aware of my personal contributions to the team project. (.473)

Factor 3: Helpful (eigenvalue = 2.63; variance explained = 10.13%; Cronbach's alpha = .77)
- The team activity diary helped keep our team on a timely schedule. (.834)
- The team activity diary helped each team member understand his or her responsibilities on the project. (.744)
- The team activity diary helped to prevent team members from slacking off on their duties. (.712)
- The team activity diary did a good job of showing the progress the team was making on the project. (.360)

Factor 4: Watched (eigenvalue = 2.39; variance explained = 9.20%; Cronbach's alpha = .68)
- Because of the team activity diary, I felt like my teammate(s) was (were) closely watching my contributions to the team project. (.773)
- Without the team activity diary, my teammate(s) and I might not have finished the project on time. (.711)
- Because of the team activity diary method, I felt like my professor was closely watching my contributions to the team project. (.643)
- I feared that negative information about me might be placed in the team activity diary. (.536)

Factor 5: Uneasy (eigenvalue = 1.82; variance explained = 6.99%; Cronbach's alpha = .39)
- Having to complete the team activity diary made me feel uncomfortable. (.678)
- Some of the items recorded in the team activity diary were not true. (.649)

Factor 6: Hassle (eigenvalue = 1.47; variance explained = 5.64%; Cronbach's alpha = .58)
- Sometimes my teammate(s) or I forgot to record our team's activities in the diary. (.822)
- It was a hassle to keep the team activity diary up to date. (.589)

TABLE 4
COMPARING DIARY GROUPS ON MEAN SCORES ON DIARY RATING SCALES

Diary Rating Scale   Team Activity Diary (n ≥ 64)   Individual Team Activity Diary (n ≥ 69)   Test Statistic
Positive Feelings    2.58                           2.77                                      t(132) = 1.11
Revealing            2.85                           3.52                                      t(133) = 4.40****
Helpful              2.63                           2.87                                      t(132) = 1.51
Watched              1.96                           2.47                                      t(132) = 3.67****
Uneasy               2.12                           2.38                                      t(132) = 1.43
Hassle               3.60                           3.49                                      t(133) = –0.62

NOTE: Data were collected with 5-point Likert scales, with 1 = strongly disagree and 5 = strongly agree.
****p < .001, two-tailed.
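The TAD-versus-ITAD comparisons in Table 4 are independent-samples t tests on the diary rating scale scores. A minimal SciPy sketch on hypothetical data:

```python
from scipy.stats import ttest_ind

# Hypothetical mean scores on the Revealing scale for the two diary conditions.
tad_scores  = [2.6, 3.0, 2.8, 2.4, 3.2]
itad_scores = [3.4, 3.8, 3.2, 3.6, 3.5]

t_stat, p_value = ttest_ind(tad_scores, itad_scores)
df = len(tad_scores) + len(itad_scores) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.4f}")
```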

However, the results in Table 2 reveal that there are no significant differences in peer rating scores between those in the control group and those exposed to the TAD. Moreover, in terms of the first three peer rating scales in Table 2, those exposed to the ITAD produced worse peer evaluation ratings than those in the other two groups. That is, those exposed to the ITAD evaluated their partners as being more incompetent, less supportive, and more unavailable than those in the other two groups. Furthermore, those exposed to the ITAD were more likely to rate their partners as bossy than those in the control group. These results for the most part are contradictory to what was predicted and show no support for either Proposition 1 or Proposition 4.

Proposition 3 predicted that those exposed to the ITAD would be more likely to feel that their behavior was being monitored than those exposed to the TAD. Two diary rating scales relate directly to this proposition, namely, the Revealing Scale and the Watched Scale. High scores on the Revealing Scale indicate that a person believes that his or her individual contribution will be detected from use of the diary. Similarly, high scores on the Watched Scale indicate that a person believes that his or her behavior in the group is being monitored, whether it be by the diary keeper, his or her partners, or the professor.

Those exposed to the ITAD scored significantly higher on both the Revealing Scale and the Watched Scale than those exposed to the TAD (see Table 4). These results support Proposition 3.

Proposition 2 and Proposition 5 collectively predicted that group member firings would occur most often with the control group, less often among those exposed to the TAD, and least often among those exposed to the ITAD. Table 5 displays the percentage of groups in the control and treatment conditions firing at least one group member. From these data, it is clear that groups exposed to the ITAD were far more likely to experience firings (31%) than those exposed to either the control condition (7.7%) or the TAD (4%). These results tend to contradict what was predicted and provide no support for either Proposition 2 or Proposition 5.

The remaining two propositions—Proposition 6 and Proposition 7—predicted that ITAD evaluations would be correlated with the peer evaluations. The two peer rating scales measuring positive traits are the Supportive Scale and the Super Worker Scale. As shown in Table 6, these scales have significant negative correlations with the number and severity of complaints on the ITAD. These results support Proposition 6.
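The firing rates in Table 5 form a 2 × 3 contingency table. SciPy's fisher_exact routine handles only 2 × 2 tables, so the sketch below substitutes a chi-square test of independence; the counts are implied by Table 5's percentages and group sizes (2/26 control, 1/25 TAD, 9/29 ITAD) rather than reported directly, so treat them as reconstructed.

```python
from scipy.stats import chi2_contingency

# Rows: teams firing at least one member vs. teams firing none; columns: control, TAD, ITAD.
table = [[2, 1, 9],
         [24, 24, 20]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")
# For comparison, the article reports a Fisher's exact test value of 8.20 (p < .05).
```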


TABLE 5
PERCENTAGE OF TEAMS FIRING AT LEAST ONE TEAMMATE

                                     Control (n = 26)   Team Activity Diary (n = 25)   Individual Team Activity Diary (n = 29)   Fisher's Exact Test Value
Teams firing one or more teammates   7.7                4.0                            31.0                                      8.20**

**p < .05, two-tailed.

TABLE 6
CORRELATIONS BETWEEN NUMBER AND SEVERITY OF COMPLAINTS ON THE INDIVIDUAL TEAM ACTIVITY DIARY (ITAD) AND SCORES ON THE PEER RATING SCALES

Peer rating scale scores for Teammate 1                       Incompetent   Supportive   Unavailable   Bossy   Super Worker
  Number of complaints on ITAD about Teammate 1 (n = 61)      .74†††        –.75†††      .65†††        .15     –.43†††
  Severity of complaints on ITAD about Teammate 1 (n = 21)    .62†††        –.63†††      .45††         .19     –.57†††
Peer rating scale scores for Teammate 2
  Number of complaints on ITAD about Teammate 2 (n = 44)      .64†††        –.61†††      .61†††        .13     –.26††
  Severity of complaints on ITAD about Teammate 2 (n = 19)    .70†††        –.68†††      .76†††        .20     –.33†

NOTE: Pearson correlation coefficients are displayed for the number-of-complaints rows; Spearman correlation coefficients (rs) are displayed for the severity-of-complaints rows.
†p < .10, one-tailed. ††p < .05, one-tailed. †††p < .01, one-tailed.

Three peer rating scales measure negative traits. They are the Incompetent Scale, the Unavailable Scale, and the Bossy Scale. With the exception of the Bossy Scale, these scales have significant positive correlations with the number and severity of complaints on the ITAD (see Table 6). These results for the most part support Proposition 7.

Attitudes Toward the Diary Method

Both groups exposed to the diary method—whether it be the TAD or ITAD—were asked to complete a diary survey at the end of the semester. The results indicate that the ITAD respondents were more likely than the TAD respondents to check off on a structured question that it was a "good idea" for the instructor to require a diary detailing the group's activities (66% vs. 45%), χ2(2) = 6.17, p < .05. Yet, a very low percentage of each diary group—approximately 10%—felt that the diary requirement increased their contribution to the project. Similarly, only 6% of each diary group felt that the diary increased the contribution of one or more of their group's members.

The data in Table 4 show that the ITAD respondents gave higher mean scores than the TAD respondents on the Revealing and Watched scales. These results were expected (see Proposition 3). However, both diary groups showed a lack of enthusiasm for the diary method as they both scored in the "neutral zone" on the Positive Feelings Scale and on the Helpful Scale. The remaining scale scores indicate that although both diary groups felt it was a "hassle" to complete the diaries, neither group felt "uneasy" about the diaries.

When those exposed to the TAD or ITAD were asked what they liked about the diary method, common responses from both treatment groups were that the diary method was good at identifying group member contributions and at summarizing group progress (see Table 7). TAD respondents, however, were more likely than ITAD respondents to mention that the diary kept their group on schedule (30.2% vs. 8.2%).

As for negative comments on the diary method, the most common response from both diary groups was that maintaining the diary was too time-consuming (see Table 8). TAD respondents were inclined to state that they did not understand the purpose of the diary (30.4%) and that they often forgot to maintain the diary (17.4%), whereas ITAD respondents were inclined to object to having to turn in the diary so often (11.4%). Some of the ITAD respondents indicated that the diary made them feel paranoid (6.8%).

When students were asked to offer suggestions for making the diary assignment more acceptable, TAD respondents were most likely to offer the following comments: (a) Turn the diary in more often (30.2%), (b) require a diary from each student (25.6%), (c) give more guidelines on diary content (14.0%), (d) give the diary more weight on the project grade (11.6%), (e) make the project grade independent of the diary performance (7.0%), and (f) make the purpose of the diary clear (4.7%) (see Table 9).


TABLE 7
POSITIVE COMMENTS ABOUT THE TEAM ACTIVITY DIARY (%)

Positive Comment                                        Team Activity Diary (n = 43)   Individual Team Activity Diary (n = 49)
Indicated who contributed and who didn't                25.6                           40.8
Summarizes how the team is progressing                  30.2                           24.5
It kept us on schedule                                  30.2                            8.2
Made clear who was responsible for each area             2.3                           14.3
Easy to do                                               0.0                            6.1
Makes people more aware that they are being watched      0.0                            4.1
Miscellaneous                                           11.6                            2.0

NOTE: Based on the question: "What, if anything, did you like about the team activity diary?"

TABLE 8
NEGATIVE COMMENTS ABOUT THE TEAM ACTIVITY DIARY (%)

Negative Comment                                                               Team Activity Diary (n = 46)   Individual Team Activity Diary (n = 44)
Too time-consuming                                                             39.1                           50.0
Didn't see the need or purpose for it                                          30.4                            2.3
Forgot to record the team's activities                                         17.4                            6.8
Information provided may not be accurate                                        4.3                            2.3
Didn't like having to turn the diary in so often                                0.0                           11.4
Made me feel paranoid                                                           0.0                            6.8
Didn't like team grade affected by other teammate's performance on the diary    0.0                            4.5
Didn't like having to submit a cumulative diary at the end of the term          0.0                            4.5
Didn't like the open-ended format of the diary                                  0.0                            4.5
Miscellaneous                                                                   8.7                            6.8

NOTE: Based on the question: "What, if anything, did you dislike about the team activity diary?"

Two of these suggestions—Items d and e—are not consistent with each other, making it difficult to alter the TAD in a way that would please everyone. Nonetheless, when the requirements for the ITAD were formulated, most of the aforementioned suggestions were taken into consideration. As a result, ITAD respondents, except for saying they wanted more weight on the diary grade, were not likely to make any of the previously mentioned suggestions. Instead, the ITAD respondents offered suggestions that would make their task of maintaining the diary easier. For example, they would like to turn the diary in less often (19.5%), have a diary that allows for a structured response (9.8%), have a team diary (7.3%), forgo the cumulative diary at the end of the semester (4.9%), and maintain the diary on the Internet (4.9%). Some of the ITAD respondents showed a desire to make the diary public to all members of the group (7.3%).

DISCUSSION AND CONCLUSIONS

The TAD had numerous shortcomings that made it an ineffective treatment. First, the students did not understand its purpose and had little motivation, except for a 4% project grade incentive, to maintain an accurate and descriptive diary. Second, each student was not involved in the maintenance of the TAD. Often the diary was maintained by only one person in the group, and the majority of students had no involvement with the diary. In some cases, it appeared that the diary keeper was a social loafer, causing much of the diary content to be biased in favor of the loafer. Third, the focus of the TAD was on group problems rather than on partner problems. Consequently, the TAD was not particularly adroit at revealing fault with a social loafer. Fourth, because the TAD was collected only as a cumulative diary at the end of the semester, the students who maintained the diaries were not always monitoring and recording activities in a timely fashion. In some cases, it was obvious to this instructor that most of the diary's contents were created just before they were turned in at the end of the semester. Finally, because the TAD was submitted to the professor only at the end of the semester, the professor could not address partner problems in a proactive manner.

The ITAD was developed to correct many of the shortcomings of the TAD. When the ITAD was introduced to the students, its purpose was clearly described. Students were given a sample of an ITAD and shown how their ITADs should focus on partner problems rather than on group problems. With the ITAD, the diary requirement was an individual requirement, thereby getting everyone involved in the diary maintenance process.


TABLE 9
STUDENT SUGGESTIONS FOR THE TEAM ACTIVITY DIARY (%)

Student Suggestion                                                       Team Activity Diary (n = 43)   Individual Team Activity Diary (n = 41)
Diary should be turned in to the instructor more often                   30.2                            9.8
The diary should be given more weight in the team project grade          11.6                           19.5
Individual diaries should be required                                    25.6                            0.0
More guidelines are needed on diary content                              14.0                            0.0
Need to know the purpose of the diary                                     4.7                            0.0
The diary should have no effect on the team's grade                       7.0                            0.0
Turn the diary in to the professor less often                             0.0                           19.5
Would prefer a structured format to the diary                             0.0                            9.8
Team diaries should be required                                           0.0                            7.3
Diaries should not be confidential                                        0.0                            7.3
Should not have to turn in a cumulative diary at the end of the term      0.0                            4.9
Diary should be maintained on the Internet                                0.0                            4.9
Miscellaneous                                                             7.0                           17.1

NOTE: Based on the question: "Please complete the following sentence. I think the team activity diary would be an acceptable assignment if . . . ."

Students were given a variety of motivations to maintain the diary. They were told the diary would be graded and that it would account for approximately 8% of their project grade. Moreover, they were told that any failure to submit or properly maintain the diary would lower individual project grades and could lower group grades. Once the project began, ITADs were collected every 2 weeks. The periodic collection of the diaries forced students to think about their partners often and allowed the professor time to deal with social loafing problems.

Both the TAD and ITAD were created to prevent social loafing. Both treatments, however, failed in that mission. The ITAD nonetheless did have a dramatic effect on the results, although the results were not in the predicted direction. After the ITAD was implemented, peer evaluations were at their lowest level and firings were at their highest level. One interpretation of these results is that the ITAD caused students to misbehave on the project, resulting in the low peer evaluations and a high number of firings. Another interpretation—one that this author believes is more tenable—is that the ITAD had no effect on the students' tendency to loaf on the group project. In other words, this author believes that the tendency for students to loaf on the project was the same among the control group and the two treatment groups. And although neither the TAD nor ITAD was able to prevent social loafing, the ITAD dramatically affected the majority of the class, namely, the nonloafers, in a way that was unforeseen. That is, the ITAD and the proactive manner in which it was implemented not only made the nonloafers extremely alert to problems caused by their loafing partners, but it also made the nonloafers intolerant of errant behavior. For this reason, the ITAD resulted in poorer peer evaluations and more firings than either the control group or the group exposed to the TAD.

It appears the real value of the ITAD and the manner in which it was implemented is that it causes the nonloafers to be constantly aware of the follies of their errant partners. This vigilance makes the nonloafers intolerant of any social loafer's shortcomings and empowers them to deal effectively with the problem.

This study showed that the ITAD evaluations and the peer evaluations are related, thereby providing convergent validity for the ITAD. Instructors who use the ITAD should realize that it can provide a rich description of group member activity because students are free to write whatever they want. If structured peer evaluations are collected along with the ITADs, the instructor will have an even more complete picture of group member behavior. Moreover, the ITADs and structured peer evaluations can be compared for their consistency. If a professor notices that an evaluator has been inconsistent between the two sets of evaluations, the professor can ask the evaluator to explain the discrepancies.

Students had mixed feelings toward the ITAD, and it undoubtedly could be fine-tuned to make it more appealing and effective. Rather than have the students turn in paper copies of their ITADs, the instructor could have them submit their periodic recordings by e-mail or via a blog. Also, the format of the diary could be changed to make it more efficient for each student to focus on partner problems. For example, students could be asked to rank order their partners on contributions and then to explain the basis for their lowest ranking. Whether diaries should be confidential is another area of concern. Confidential diaries are more likely than public ones to elicit honest information from students, but public diaries would allow students to learn from the insights of their colleagues. Another area that might be investigated is how to deal effectively with students who are tagged as social loafers. In this study, social loafers identified by the ITADs were e-mailed a warning notice that explained their deficiency and suggested corrective action. It might be more effective, however, if the instructor had a personal meeting with each social loafer to emphasize the seriousness of the situation and to get a better idea of the loafer's problem and of how to handle it. Araki, Hashimoto, Kono, Matsuki, and Yano (2006) found that counseling is much more effective when done face to face than by e-mail. Finally, future researchers might consider how the diary method affects large groups. This study was restricted to groups of two or three students.

Although the TAD is a group diary and the ITAD is an individual diary, the reader should note that these treatments differ by more than just the "group" versus "individual" aspect. The treatments were implemented with differences in grade weights, diary formats, frequency of submission requirements, confidentiality procedures, and the manner in which social loafers were handled. Any one difference or combination of differences could account for the observed results. This study was also limited by small sample sizes and by some attitude scales that had weak reliability coefficients. Moreover, the validity of the scales used in this study can be questioned because they have little more than face validity. Finally, this study was conducted under the assumption that the quality of the students was equivalent across the three semesters under investigation. Future researchers could hopefully improve on these limitations.

NOTE

1. An alternative analysis was conducted in which each group member's impressions of his or her partners were measured by using the group member's impressions of only the worst group performer, if appropriate. This alternative analysis sometimes affected the peer rating scale scores of persons who had three persons in their group. When the alternative analysis was used to compare the various treatment and control groups on their mean scores on the peer rating scales in Table 2, the results of the statistical tests did not change.

REFERENCES

Abernethy, A. M., & Lett, W. L., III. (2005). You are fired! A method to control and sanction free riding in group assignments. Marketing Education Review, 15(1), 47-54.
Albanese, R., & Van Fleet, D. (1985). Rational behavior in groups: The free-riding tendency. Academy of Management Review, 10, 244-255.
Araki, I., Hashimoto, H., Kono, K., Matsuki, H., & Yano, E. (2006). Controlled trial of worksite health education through face-to-face counseling vs. e-mail on drinking behavior modification. Journal of Occupational Health, 48, 239-245.
Batra, M. M., Walvoord, B. E., & Krishnan, K. S. (1997). Effective pedagogy for student-team projects. Journal of Marketing Education, 19(2), 26-42.
Beatty, J., Haas, R., & Sciglimpaglia, D. (1996). Using peer evaluations to assess individual performances in group class projects. Journal of Marketing Education, 18(2), 17-27.
Belanger, K., & Greer, J. (1992). Beyond the group project: A blueprint for a collaborative writing course. Journal of Business and Technical Communication, 6, 99-115.
Broden, M., Hall, R., & Mitts, B. (1971). The effect of self-recording on the classroom behavior of two eighth-grade students. Journal of Applied Behavior Analysis, 4, 191-199.
Brooks, C. M., & Ammons, J. L. (2003). Free riding in group projects and the effects of timing, frequency, and specificity of criteria in peer assessments. Journal of Education for Business, 78, 268-272.
Chen, Y., & Lou, H. (2004). Students' perceptions of peer evaluation: An expectancy perspective. Journal of Education for Business, 79, 275-282.
DiGangi, S. A., Maag, J. W., & Rutherford, R. B., Jr. (1991). Self-graphing of on-task behavior: Enhancing the reactive effects of self-monitoring on on-task behavior and academic performance. Learning Disability Quarterly, 14, 221-230.
Dommeyer, C. J. (1986). A comparison of the individual proposal and the team project in the marketing research class. Journal of Marketing Education, 8(1), 30-38.
Dommeyer, C. J., & Lammers, H. B. (2006). Students' attitudes toward a new method for preventing loafing on the group project: The team activity diary. Journal of College Teaching & Learning, 3, 15-22.
Forsyth, L. (1998). A prospective self-monitoring investigation of naturally occurring reinforcement of exercise behavior. Dissertation Abstracts International, 58, 3921.
Freeman, K., & Dexter-Mazza, E. (2004). Using self-monitoring with an adolescent with disruptive classroom behavior: Preliminary analysis of the role of adult feedback. Behavior Modification, 28, 402-419.
Goretsky, M. E. (1984). Class projects as a form of instruction. Journal of Marketing Education, 6(3), 33-37.
Grieb, T., & Pharr, S. W. (2001). Managing free-rider behavior in teams. Journal of the Academy of Business Education, 2, 37-47.
Haas, A., Haas, R., & Wotruba, T. (1998). The use of self-ratings and peer ratings to evaluate performances of student group members. Journal of Marketing Education, 20, 200-209.
Henke, J. W., Jr. (1985). Bringing reality to the introductory marketing student. Journal of Marketing Education, 7(3), 59-71.
Ingham, A., Levinger, J., & Peckham, V. (1974). The Ringelmann effect: Studies of group size and group performance. Journal of Experimental Social Psychology, 10, 371-384.
Kanfer, F. (1970). Self-monitoring: Methodological limitations and clinical applications. Journal of Consulting and Clinical Psychology, 35, 148-152.
Kerr, N. (1983). Motivation losses in small groups: A social dilemma analysis. Journal of Personality and Social Psychology, 45, 819-828.
Latane, B., Williams, K., & Harkins, S. (1979). Many hands make light the work: The causes and consequences of social loafing. Journal of Personality and Social Psychology, 37, 822-832.
McCorkle, D. E., Reardon, J., Alexander, J. F., Kling, N. D., Harris, R. C., & Iyer, R. V. (1999). Undergraduate marketing students, group projects, and teamwork: The good, the bad, and the ugly? Journal of Marketing Education, 21, 106-117.
McLaughlin, M., & Fennick, R. (1987). Collaborative writing: Student interaction and response. Teaching English in the Two-Year College, 14, 214-216.
Mello, J. (1993). Improving individual member accountability in small work group settings. Journal of Management Education, 17, 253-259.
Myers, D. (1990). Social psychology (3rd ed.). New York: McGraw-Hill.
North, A. C., Linley, P. A., & Hargreaves, D. J. (2000). Social loafing in a co-operative classroom task. Educational Psychology, 20, 389-392.
Richards, S., McReynolds, W., Holt, S., & Sexton, T. (1976). Effects of information feedback and self-administered consequences on self-monitoring study behavior. Journal of Counseling Psychology, 23, 316-321.
Robbins, T. (1995). Social loafing on cognitive tasks: An examination of the "sucker" effect. Journal of Business and Psychology, 9, 337-342.
Shepperd, J. (1993). Productivity loss in performance groups: A motivation analysis. Psychological Bulletin, 113, 67-81.
Shepperd, J., & Wright, R. (1989). Individual contributions to a collective effort: An incentive analysis. Personality and Social Psychology Bulletin, 15, 141-149.
Snider, V. (1987). Use of self-monitoring of attention with LD students: Research and application. Learning Disability Quarterly, 10, 139-151.
Strong, J., & Anderson, R. (1990). Free-riding in group projects: Control mechanisms and preliminary data. Journal of Marketing Education, 12(2), 61-67.
Todd, A., Horner, R., & Sugai, G. (1999). Self-monitoring and self-recruited praise: Effects on problem behavior, academic engagement, and work completion in a typical classroom. Journal of Positive Behavior Interventions, 1(2), 66-76.
Williams, D. L., Beard, J. D., & Rymer, J. (1991). Team projects: Achieving their full potential. Journal of Marketing Education, 13(2), 45-53.
Williams, K., & Karau, S. (1991). Social loafing and social compensation: The effects of expectations of co-worker performance. Journal of Personality and Social Psychology, 61, 570-581.