2015 48th Hawaii International Conference on System Sciences

Brainstorming Is Just the Beginning: Effects of Convergence Techniques on Satisfaction, Perceived Usefulness of Moderation, and Shared Understanding in Teams

Isabella Seeber1, Ronald Maier1, Barbara Weber1, Gert-Jan de Vreede2, Triparna de Vreede2, and Abdulraham Alothaim2
1 University of Innsbruck, [email protected]
2 University of Nebraska at Omaha, {gdevreede, tdevreede, aalothaim}@unomaha.edu

1530-1605/15 $31.00 © 2015 IEEE, DOI 10.1109/HICSS.2015.76

Abstract

Much research exists regarding (computer-supported) idea generation, yet little is known about how teams can be facilitated to efficiently converge on a set of ideas. Convergence is a critical activity in collaborative problem-solving and decision-making, as teams need to focus their cognitive resources on the most promising ideas resulting from a brainstorming activity. This paper investigates the effects of facilitation on team outcomes to foster our understanding of convergence. We tested three facilitated convergence techniques, SelfSifter, FastFocus, and TreasureHunt, that offer different levels of procedural structuring and discussion allowances. An experiment with 138 subjects in 34 teams showed significant improvements in satisfaction with process and product after convergence. SelfSifter and TreasureHunt outperformed FastFocus in terms of shared understanding, while FastFocus and TreasureHunt outperformed SelfSifter in terms of usefulness of moderation. Our findings contribute to a better understanding of how teams can benefit from facilitated convergence techniques.

1. Introduction

Organizations rely on teams in the hope that the collaboration among team members improves problem-solving and decision-making [1]. Information technologies (IT) support collaboration in general and the swift and effortless collection of ideas in particular [2] with techniques such as electronic brainstorming or discussion boards. The often substantial set of generated ideas presents teams with the challenge of information overload, leading to decreased performance [3]. The generation of a large number of brainstorming ideas does not by itself offer any guarantee that an optimal or even suitable solution is ultimately implemented. Related research recognizes this challenge and searches for ways to effectively select high-quality ideas by evaluating ideas after brainstorming [e.g., 4]. But how can ideas be evaluated if the team does not have a shared understanding of them? Therefore, idea generation is often followed by convergence, which helps teams to clarify a reduced set of ideas that they consider worthy of further attention [5]. In other words, the goal of convergence is to arrive at a smaller set of clarified ideas of which the team has a shared understanding [6]. Convergence can subsequently be followed by a focused idea evaluation and selection during which specific ideas are singled out, based on their relevance for solving a problem, to be acted upon [7, 8].

While significant research has been conducted on the generation of ideas in manual and computer-supported teams [9], only recently have researchers begun to focus on convergence in teams (e.g., [5]). Past field research suggests that guiding a team through convergence requires more facilitation skills than any other pattern of collaboration [10]. Different convergence techniques have been proposed to guide the facilitation process [11]. On the one hand, techniques differ in their level of procedural structuring, which provides teams with support for managing their collaboration process, e.g., ground rules and routines [12]. On the other hand, techniques differ in their level of discussion allowance, which drives free team discussions to build shared understanding [13]. Yet, it remains unclear which levels of procedural structuring and discussion allowance lead to effective collaboration.

Too little procedural structuring might not provide enough support for managing teamwork processes, resulting in coordination problems [14] and the team losing sight of their convergence goal. Conversely, too much procedural structuring might restrict the flexibility necessary to react to dynamics in team processes [15]. Similarly, the extent of discussion allowance influences how teams can engage in (co-)construction of knowledge and constructive conflict to establish consensual mutual meaning, which leads to shared understanding and improved team performance [16]. To the best of our knowledge, the effects of procedural structuring and discussion allowances on the outcomes of convergence have not yet been studied. This, however, is important, as effective team processes improve team performance. In this paper, we address this research gap by studying the effects of procedural structuring and discussion allowances, using the three facilitated convergence techniques FastFocus, SelfSifter, and TreasureHunt, on teams' satisfaction with process and outcomes, the teams' perceived usefulness of moderation, and shared understanding. These techniques differ with respect to the extent of procedural structuring and the extent of discussion they allow and were tested in a laboratory experiment involving 138 subjects in 34 teams.

2. Background

This section introduces the main concepts that this paper builds upon.

2.1. Facilitation in Computer-Supported Environments

Human facilitation is usually differentiated into process, content, and technical facilitation. Process facilitation provides support on how to structure the collaboration process [17]. Content facilitation offers insights, interpretations, or opinions about the task [18]. Technical facilitation helps teams to use available technology that fits their task [19]. When focusing on process facilitation, a facilitator can state well-timed clarifications, provide summaries, ask genuine questions about observed incongruities, or suggest ways to move forward [20]. A facilitator can also help the group to take responsibility for their meeting outcomes, keep discussions on topic, rephrase and make explicit the team's ideas, test for agreement among participants, identify decisions, design and flexibly run meeting activities, build positive relationships among participants, strive for equal participation, and manage conflict and negative emotions [21]. Facilitators pursue procedural improvements by refocusing the participants on the team goals throughout their collaboration and consequently avoid or minimize non-task behavior. This includes breaking down complex processes into sub-activities, e.g., brainstorming or voting [22]. Thus, facilitators face a complex challenge when they design and moderate a productive team process.

To structure this challenge, researchers have proposed the Six-Layer Model of Collaboration (SLMC) [23]. According to the SLMC, facilitators can separate design concerns for a complex collaboration session into the following distinct layers, where each layer sets the context and scope for the next one(s): Layer 1 – Collaboration Goals, Layer 2 – Group Products/Deliverables, Layer 3 – Group Activities, Layer 4 – Group Procedures, Layer 5 – Collaboration Tools, and Layer 6 – Collaborative Behaviors [23]. In this paper we focus specifically on layers 4, 5, and 6. In the fourth layer, researchers differentiate between five patterns of collaboration that teams go through when they work together towards a joint goal: generation (also known as brainstorming, divergence, or ideation), convergence (reduce and clarify), organization, evaluation, and building commitment [24, 25]. This layer also includes the conceptual template of a facilitation technique. Using this template as a codification scheme for best facilitation practices, researchers have developed a library of facilitation techniques, representing a pattern language for collaboration process design. This pattern language is called thinkLets. Each thinkLet specifies, in terms of the tools to be used (Layer 5) and the script to follow (Layer 6), how a facilitator can guide a group to create a specific pattern of collaboration. In other words, each thinkLet provides explicit and specific instructions to the team for working together in such a way that they move through one of the five patterns of collaboration [24].

2.2. Convergence ThinkLets

The convergence pattern of collaboration is preceded by a generation pattern in which a team ideates. The input for convergence is a collection of generated ideas that needs to be reduced and clarified to prepare for evaluation and selection [24]. Convergence consists of the sub-activities filtering, abstracting, synthesizing, and building shared understanding. Filtering concerns team members selecting a subset of all generated ideas. Abstracting and synthesizing concern team members processing ideas by combining, generalizing, or detailing concepts. Building shared understanding concerns the team members' communication processes in which they clarify their ideas to develop a shared understanding [26]. Here, we focus on the sub-activity "building shared understanding" and argue that facilitation techniques predominantly differ in their extent of procedural structuring and discussion allowances. Procedural structuring describes the configuration of a team's decision process by influencing one or more procedural dimensions. These dimensions comprise the

sequencing of activities, the pace of communication, the content of communication messages, the communication mode, the vigilance of engagement in the activity, and the selection of process support structures [27]. Discussion allowance describes what kind of communication is permitted within the team. A team's success greatly depends on how its members communicate [28]. They develop collective knowledge by passing on relevant information in an appropriate manner, explaining and interpreting exchanged information, developing proposals to solve problems, clarifying proposals, arguing for or against proposals, and planning and regulating their team process [29]. In this study we focus on three distinct convergence thinkLets: SelfSifter (SS), FastFocus (FF), and TreasureHunt (TH). For each of these three convergence techniques, the facilitator clearly presents the goal and the expected product at the beginning of the convergence activity. In the following, we discuss how the thinkLets vary in how they support procedural structuring and discussion allowances.

SelfSifter is characterized by low procedural structuring and high discussion allowance. The role of the facilitator is limited to presenting the goals and the expected outcome of the activity. No further procedural structuring takes place. SelfSifter is characterized by unrestricted team discussions to develop a reduced list of clarified ideas. The facilitator does not provide any prompts while the team discusses. The facilitator solely monitors the discussion among the team members and gathers the final output, explicated by one of the team members, on a public space that everyone can see. Therefore, the team members themselves need to manage how they will go about converging ideas.

FastFocus [11], in turn, is characterized by high procedural structuring and low discussion allowance. First, the facilitator assigns each team member a subset of ideas created in the generation activity.
Then, the facilitator calls on each team member in turn to select what (s)he believes to be an idea worthy of further attention from the subset of ideas (s)he has access to. The facilitator then allows team members to question and improve idea clarity (but not judge the ideas) to build shared understanding. The facilitator further ensures task fit, avoids redundancies, and may generalize or specify the idea to an appropriate level of abstraction by prompting the content of discussions. Each time an idea is selected and is clear to the whole team, the facilitator gathers it on a public space that everyone can see.

Finally, TreasureHunt is characterized by high procedural structuring and high discussion allowance. First, the facilitator creates sub-teams. The sub-teams

are asked to select two ideas from the assigned list of generated ideas. The facilitator explicitly asks the sub-teams to discuss the ideas in pairs. This activity strives to build shared understanding within the sub-teams. Then, the facilitator calls on each sub-team in turn to share the idea that should be further considered and added to the public space. The facilitator then prompts the sub-team and the other team members in the same way as with the FastFocus thinkLet, resulting in the collection of clarified ideas on a public space that everyone can see. Table 1 summarizes the differing levels of procedural structuring and discussion allowances per thinkLet.

Table 1: Comparison of convergence thinkLet characteristics

                Procedural Structuring   Discussion Allowance
SelfSifter      Low                      High
FastFocus       High                     Low
TreasureHunt    High                     High
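The two design dimensions summarized in Table 1 can be expressed as a small data model. A hedged sketch in Python follows; the dataclass and field names are our own illustration, not part of the thinkLet specifications:

```python
# Illustrative model of the two dimensions on which the three convergence
# thinkLets differ (values taken from Table 1; the types are hypothetical).
from dataclasses import dataclass

@dataclass(frozen=True)
class ConvergenceTechnique:
    name: str
    procedural_structuring: str  # "low" or "high"
    discussion_allowance: str    # "low" or "high"

TECHNIQUES = [
    ConvergenceTechnique("SelfSifter", "low", "high"),
    ConvergenceTechnique("FastFocus", "high", "low"),
    ConvergenceTechnique("TreasureHunt", "high", "high"),
]

# Techniques whose script pre-defines large parts of the team's process:
structured = [t.name for t in TECHNIQUES if t.procedural_structuring == "high"]
# Techniques that permit largely free team discussion:
discursive = [t.name for t in TECHNIQUES if t.discussion_allowance == "high"]
```

As the paper notes, TreasureHunt is the only technique rated high on both dimensions.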

3. Hypotheses Development

The purpose of a convergence activity is to reduce and clarify a list of generated ideas. Teams are typically confronted with having generated many more ideas during a brainstorming activity than they can meaningfully consider in detail. In order to achieve the goal of their efforts (e.g., solve a problem or make a decision), they need to focus their attention on a limited number of these ideas that they find most promising. We argue that the extent to which they are able to do this will influence their levels of satisfaction regarding the team process and outcomes. Satisfaction is a function of perceived goal attainment [30, 31]. Therefore, team members' satisfaction level will depend on their perception of the extent to which the results of their work contribute to the team goal and to their private goals. From this perspective, it can be argued that a well-executed convergence activity, in which team members can determine which ideas will be considered in more detail, will positively influence team members' satisfaction perceptions. Team members will have a better and more focused overview of the key ideas that will contribute to their goals compared to the results of a brainstorming activity. This will also increase their positive affect toward the process that resulted in these outcomes [30]. Therefore, we propose:

Proposition 1: Perceived satisfaction with process is a function of perceived goal attainment.

ThinkLets describe best practices that, when faithfully executed, allow for predictable and repeatable

results and group dynamics [11]. By establishing procedural structures, the facilitator creates productive meeting processes, which team members should perceive as more satisfying [32]. Based on this proposition we define the following hypothesis:

H1: ThinkLet-facilitated teams will report higher satisfaction with process after convergence than after generation.

Whereas satisfaction with the process is important for the ongoing team collaboration, satisfaction with the product is necessary for the successful implementation of the decision made [32]. We propose:

Proposition 2: Perceived satisfaction with product is a function of perceived goal attainment.

By faithfully adopting thinkLets, facilitation inspires team members and reinforces targeted, task-relevant discussions that assist teams in developing their solutions. Hence, team members should be more satisfied with their solutions [32]. Based on this proposition we define the following hypothesis:

H2: ThinkLet-facilitated teams will report higher satisfaction with product after convergence than after generation.

Convergence activities are considered more challenging for team members than generation activities [3, 10, 33]. After brainstorming, teams often face information overload because of the high number of ideas that need to be handled [34]. Each team member has limited cognitive resources at their disposal to deliberate on the ideas generated, to communicate their opinions and insights, and to retrieve and apply knowledge and information to assist in this process [35]. Teams facilitated with higher procedural structuring can outperform teams facilitated with lower procedural structuring because parts of their collaboration process, e.g., the ordering of tasks, are already pre-defined [36].

Therefore, higher levels of procedural structuring should be perceived as useful because they help teams apply their cognitive resources efficiently and effectively to the task rather than to the management of the team. Therefore, we propose:

Proposition 3: The presence of procedural structuring positively impacts the perceived usefulness of moderation.

With the FastFocus and TreasureHunt thinkLets, a facilitator provides considerable guidance on how teams should work together to converge on ideas. Teams benefit from this coordination support because the facilitator actively guides them through the convergence challenge by continuously telling them what actions to perform and how to work together. Teams also receive discussion support because the facilitator actively prompts conversations so that the convergence goal will be achieved. Therefore, the team requires fewer cognitive resources to consider and decide

on how to proceed and how to communicate effectively. This extent of guidance is absent in convergence techniques such as SelfSifter that do not offer any procedural structuring. We hypothesize:

H3a: FastFocus teams will report higher usefulness of moderation than SelfSifter teams.
H3b: TreasureHunt teams will report higher usefulness of moderation than SelfSifter teams.

A critical component of a convergence activity is the creation of shared understanding among the team members regarding the ideas under consideration. This can be challenging because team members at times use terminology differently or do not have the same knowledge or expertise regarding the topic at hand [37]. The development of shared understanding is closely related to the development of a Shared Mental Model (SMM) among team members [38]. A SMM is a representation of knowledge structures that are shared among team members [38]; it enables team members to find a common ground to describe, explain, and predict concepts and events in their task environment [39]. SMMs may cover four areas [38]: (a) knowledge about equipment and tools; (b) knowledge about the task, goal, and performance requirements; (c) knowledge about other team members' abilities, knowledge, and skills; and (d) knowledge about appropriate team interactions. SMMs are dynamic. They develop over time through discussion of issues, sharing knowledge, and learning from past mistakes and successes [40, 41]. Such team discussions are a critical prerequisite for shared understanding in teams [13]. We propose:

Proposition 4: Perceived shared understanding in teams is a positive function of discussion allowances.

The TreasureHunt and SelfSifter thinkLets encourage discussions during the convergence activity as long as they are not evaluative. TreasureHunt teams first select and discuss ideas in pairs before they report their idea to the other team members and the facilitator.

While the facilitator cannot provide any guidance to ensure fair and task-focused discussions during these paired discussions, they are expected to result in more polished and well-formulated ideas that enjoy broader shared understanding among (part of) the team. The same effect is expected to apply to SelfSifter teams. In this situation, the team as a whole has ample opportunity to discuss ideas to clarify issues and build common ground, as the facilitator gives team members free rein to structure their interactions. In contrast, the structure of the FastFocus thinkLet hardly encourages team discussions. This thinkLet follows a more rigid and formal procedure in which individuals have to announce any self-perceived misunderstandings to the group on their own initiative each time an idea is added


to the public space. Consequently, we hypothesize that teams with higher levels of discussion will also achieve a higher level of shared understanding:

H4a: FastFocus teams will report lower shared understanding than TreasureHunt teams.
H4b: FastFocus teams will report lower shared understanding than SelfSifter teams.

4. Study Design

Participants were recruited from an undergraduate Information Systems course at a European university. After a pilot in October 2013, the study was performed in May 2014. The sample consists of 138 students who were randomly assigned to 4-person teams, with the exception of one 3-person team and three 5-person teams. Teams worked on a task describing a flooding crisis. The task was based upon an existing task [42] and set in a context understandable to the subjects, which was confirmed in the pilot study. The task represents a decision-making challenge that has no correct answers [43]. Four PhD students and one postdoctoral researcher served as facilitators. They received training on how and when to provide the respective facilitation prompts. Additional prompts, e.g., on how to start or end a session, were developed to ensure that all groups received similar guidance and to reduce effects that might occur due to individual differences. All teams received collaboration technology support by using ThinkTank. ThinkTank is a collaboration environment that supports activities like brainstorming and voting. We set up spaces, also called activities, for generation and convergence. In the generation activity, four categories, entitled list 1 through 4, were preconfigured. Within each of the lists, users were allowed to view, add, and reorder items. Items could only be edited or deleted by those who generated them. In the convergence activity, one category, entitled final list, was preconfigured. All users were allowed to view items, but only the facilitator could add, edit, reorder, or delete them.

The experimental procedure is depicted in Figure 1. It comprised a warm-up task to introduce ThinkTank. Then subjects received the task description, signed the consent form, and filled out the first questionnaire (cf. measurement point (MP) 1 in Figure 1).

[Figure 1: Overview of the experiment. Warm-up (10 min) → MP1 → generation (20 min) → MP2 → convergence with SelfSifter, FastFocus, or TreasureHunt (30 min) → MP3]

During generation, teams engaged in a facilitated brainstorming activity for about 20 minutes before filling out a second survey (MP2). In the last part of the experiment, teams converged on ideas for about 30 minutes by reducing and clarifying the list of previously generated ideas. The teams were facilitated with either SelfSifter, FastFocus, or TreasureHunt. Finally, teams filled out a third questionnaire after convergence (MP3) and were debriefed.

The dependent variables include perceived satisfaction with process, perceived satisfaction with product, perceived usefulness of moderation, and perceived shared understanding. Perceived satisfaction with process describes "the extent to which group members perceive themselves as participating in the decision process" [44]. Perceived satisfaction with product describes satisfaction with the meeting outcome, which might differ according to the purpose of the meeting [45]. Perceived usefulness of moderation was adapted from the original construct perceived usefulness, defined as "the degree to which a person believes that using a particular system would enhance his or her job performance" [46]. Perceived shared understanding describes "the degree to which people concur on the value of properties, the interpretation of concepts, and the mental models of cause and effect with respect to an object of understanding" [13]. The questionnaire items that were used to measure the dependent variables are provided in the Appendix. The constructs perceived satisfaction with process, perceived satisfaction with product, and perceived usefulness of moderation were measured twice, once after generation (MP2) and once after convergence (MP3). Please note that perceived shared understanding was not measured after generation (MP2) since no discussion took place in the teams during the brainstorming activity. All constructs were measured on a 7-point Likert scale and adapted in their wording to fit the experimental context.

All survey data were analyzed with IBM SPSS Statistics 19. Table 2 provides information on the sample. Fifty students were assigned to 12 SelfSifter teams, 44 students were assigned to 11 FastFocus teams, and 44 students were assigned to 11 TreasureHunt teams. We further collected information on experience with collaboration systems (technology knowledge), past participation in facilitated meetings (facilitation knowledge), experience with flooding crises (domain knowledge), and knowledge about team members (past working history).

Table 2: Sample description, means (standard deviations)

                         SS           FF           TH
N (Teams)                12           11           11
N (Subjects)             50           44           44
Technology knowledge     1.02 (.141)  1.07 (.452)  1.00 (.000)
Facilitation knowledge   1.88 (.961)  1.68 (.883)  1.70 (.823)
Domain knowledge         1.35 (.201)  1.28 (.163)  1.33 (.207)
Past working history     1.34 (.895)  1.70 (1.05)  1.41 (.844)

5. Results

This section gives information on data preparation, reliability, and validity, and presents the results of our experiment, structured into results on collaboration activities (hypotheses H1 and H2) and results on convergence techniques (hypotheses H3 and H4).

5.1. Data Preparation, Reliability, and Validity

We performed univariate outlier analysis and missing data analysis to cleanse the data set. Three potential outliers were investigated in detail, also based on the subjects' feedback comments provided in the questionnaires. All outliers were deemed satisfactory to be retained. Before imputing missing data, we performed missing data analysis to assess whether data were randomly missing. All constructs except satisfaction with process after generation were found to be missing completely at random (MCAR). We then imputed missing data with the EM method and calculated averaged scales for all dependent variables. Reliability of the data was assessed with Cronbach's alpha (cf. Appendix), for which the threshold of .7 was exceeded by all constructs. Data validity was assessed by investigating the KMO measure for each construct, Bartlett's test, communalities, and component matrices. All common thresholds were reached [47]. We further tested whether the facilitators and their individual styles had effects on the dependent variables. We performed a multivariate analysis of variance (MANOVA) and found no significant differences.

5.2. Results on Collaboration Activities

A 2 (time) x 3 (convergence technique) repeated-measures MANOVA was conducted to compare the collected values for satisfaction with process and satisfaction with product (cf. Table 3 for descriptive statistics).

Table 3: Dependent variable means (standard deviations) over measurement points

                           MP2 – after generation              MP3 – after convergence
                           SS          FF          TH          SS          FF          TH
Satisfaction with process  5.26 (.71)  4.92 (.75)  5.13 (.95)  6.02 (.73)  5.70 (.76)  5.85 (.67)
Satisfaction with product  5.44 (.74)  5.09 (.81)  5.35 (.66)  5.94 (.66)  5.74 (.75)  5.70 (.62)

There was a significant effect for time (Pillai's Trace = .566, F(2, 134) = 87.50, p < .001, partial η² = .566). There were no significant effects for convergence technique (Pillai's Trace = .046, F(4, 270) = 1.575, p = .181, partial η² = .023) or for the interaction between time and convergence technique (Pillai's Trace = .033, F(4, 270) = 1.12, p = .347, partial η² = .016). Within-subject univariate analyses showed that scores for satisfaction with process (F(1, 135) = 138.159, p < .001, partial η² = .506) and scores for satisfaction with product (F(1, 135) = 17.119, p < .001, partial η² = .362) improved significantly from generation to convergence. Thus, hypotheses H1 and H2 are supported. Univariate between-subject analyses showed that the convergence techniques SelfSifter, FastFocus, and TreasureHunt did not differ significantly in scores for satisfaction with process (F(2, 135) = 2.891, p = .059, partial η² = .041) or satisfaction with product (F(2, 135) = 2.265, p = .108, partial η² = .032).
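The paper's within-subject analysis was run as a repeated-measures MANOVA in SPSS. As a rough, hedged analogue, the before/after comparison for a single construct can be sketched with a paired t-test; the ratings below are invented for illustration and are not the study's data:

```python
# Paired comparison of the same raters' satisfaction scores after generation
# (MP2) and after convergence (MP3). All numbers are invented illustrations,
# not the study's data.
from scipy import stats

mp2 = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 4.7, 5.4]  # after generation
mp3 = [5.9, 5.6, 6.2, 5.7, 5.8, 6.0, 5.5, 6.1]  # after convergence

t_stat, p_value = stats.ttest_rel(mp3, mp2)  # paired: same raters twice
# A small p-value indicates a significant within-subject change.
```

A paired test is the appropriate simplification here because the same team members rated satisfaction at both measurement points.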

5.3. Results on Convergence Techniques

We performed a MANOVA to test for differences among the treatments SelfSifter, FastFocus, and TreasureHunt on the dependent variables shared understanding and usefulness of moderation after convergence at MP3 (cf. Table 4 for descriptive statistics). The results revealed a significant effect (Pillai's Trace = .258, F(4, 270) = 9.976, p < .001, partial η² = .129), which is considered a medium effect size [48].

Table 4: Dependent variable means (standard deviations) between convergence techniques

                          SS           FF          TH
Usefulness of moderation  4.82 (1.05)  5.41 (.77)  5.61 (.72)
Shared understanding      5.80 (.60)   5.15 (.93)  5.57 (.72)

Next, the homogeneity of variance assumption was tested for the two dependent variables. Based on Levene's F tests, the homogeneity of variance assumption was considered satisfied for usefulness of moderation (p > .05), but not for shared understanding (p = .04). However, related research argues that ANOVA is still robust as long as the largest standard deviation is not more than four times the size of the corresponding smallest standard deviation [49]. This condition was satisfied and we therefore proceeded with the analysis. As illustrated in Table 5, univariate independent one-way ANOVAs showed significant main effects for usefulness of moderation and shared understanding. Usefulness of moderation (partial η² = .135) as well as shared understanding (partial η² = .115) reached a medium effect size [48].
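The chain of checks described here (Levene's test for homogeneity of variance, one-way ANOVAs, Bonferroni-corrected pairwise comparisons) can be sketched as follows. The paper used SPSS; the per-team scores below are invented for illustration and are not the study's data:

```python
# Hypothetical per-team 'usefulness of moderation' scores for the three
# techniques (invented values), run through the same kind of checks the
# paper reports: Levene's test, a one-way ANOVA, and Bonferroni-corrected
# pairwise t-tests.
from itertools import combinations
from scipy import stats

groups = {
    "SelfSifter":   [4.6, 4.9, 5.0, 4.7, 4.8, 5.1, 4.5, 4.9],
    "FastFocus":    [5.3, 5.5, 5.4, 5.6, 5.2, 5.5, 5.4, 5.3],
    "TreasureHunt": [5.5, 5.7, 5.6, 5.8, 5.4, 5.7, 5.6, 5.5],
}

# Homogeneity of variance across the three groups
lev_stat, lev_p = stats.levene(*groups.values())

# Omnibus one-way ANOVA
f_stat, anova_p = stats.f_oneway(*groups.values())

# Pairwise comparisons with Bonferroni correction: multiply each raw
# p-value by the number of comparisons (3 here), capped at 1.0
pairs = list(combinations(groups, 2))
corrected = {
    (a, b): min(1.0, stats.ttest_ind(groups[a], groups[b]).pvalue * len(pairs))
    for a, b in pairs
}
```

The Bonferroni step mirrors the paper's logic: the omnibus test establishes that the techniques differ somewhere, and the corrected pairwise tests identify which pairs differ while controlling the familywise error rate.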

Our findings offer broad support for the hypothesized effects of convergence as well as the effects of procedural structuring and discussion allowances provided by the three investigated convergence techniques. Based on our findings we discuss a number of implications for research and practice. We close this section with a discussion of limitations of the study and directions for future research. 6.1. Convergence Matters The results clearly show that team members feel that convergence is important and that it makes a significant contribution to the performance of their problem-solving and decision-making tasks. Across treatments, team members reported higher levels of satisfaction with process and with product after convergence than after generation.

Table 5: One-way ANOVA with post hoc results F Usefulness of moderation Shared understanding

P

partial Post hoc  Bonferroni 10.556 .000 .135 TH > SS; FF > SS 8.773 .000 .115 SS > FF; TH > FF

6.2. Procedural Structuring is Appreciated Using TreasureHunt and FastFocus, the facilitator guides team members to coordinate their activities and to collectively discuss meaning of ideas. The facilitator prompts team members to stay task-focused, avoid redundancies, clarify meaning, and look for more detail or abstraction in an idea. The role of the facilitator in SelfSifter is to clearly specify the expected goal and product. Otherwise, the facilitator is silent and does not guide or take part in the task discussions. Consequently, our results suggest that teams consider a high level of procedural structuring useful in order to reduce and clarify ideas.

The post-hoc analysis in Table 5 examines individual mean difference comparisons across the three treatments and the two dependent variables. Bonferroni correction was applied to control the familywise error rate due to multiple comparisons [47]. For usefulness of moderation, the results show that FastFocus outperforms SelfSifter (p = .004) and TreasureHunt outperforms SelfSifter (p = .000), thus H3a and H3b are supported. For shared understanding, the results show that TreasureHunt outperforms FastFocus (p = .033) and SelfSifter outperforms FastFocus (p = .000), thus supporting H4a and H4b. Table 6 summarizes the results of the tested hypotheses.

6.3. Free Discussions are Appreciated

Our results also suggest that discussions are a critical part of a convergence activity. TreasureHunt and SelfSifter teams reported higher levels of shared understanding than FastFocus teams. Interestingly, the combination of high procedural structuring and high discussion allowances, i.e. TreasureHunt, did not result in the highest levels of shared understanding. SelfSifter teams perceived the highest level of shared understanding, which could result from the unrestricted discussions they had in their teams. They might even have drifted into other collaboration patterns, such as generation or evaluation, or lost sight of their task goal due to the low level of procedural structuring. In the setting with low discussion allowances, i.e. FastFocus, shared understanding is comparably low, as facilitators considerably restrict the amount and timing of discussions. Consequently, our results show that convergence techniques providing discussion allowances drive shared understanding.

Table 6: Summary of hypotheses tests

Hypothesis  Dependent variable                          Result   Support?
H1          Satisfaction with process (at MP2 and MP3)           Yes
H2          Satisfaction with product (at MP2 and MP3)           Yes
H3a         Usefulness of moderation                    FF > SS  Yes
H3b         Usefulness of moderation                    TH > SS  Yes
H4a         Shared understanding                        FF < TH  Yes
H4b         Shared understanding                        FF < SS  Yes

7. Limitations and Future Research

The study was designed such that subjects first generated ideas and then converged on their own set of ideas. This carries a potential risk: some teams might not generate many ideas and therefore have only a limited set of ideas to converge on. Past research found that teams can start from a standard set of ideas not generated by themselves and still converge effectively [26]. We plan to investigate this avenue in future studies. Another limitation concerns the kind of data used for the analysis. This study concentrated on perceptions rather than externally rated performance measures of the factual team output. Future research should include such measures, as they are necessary to offer a well-rounded picture of convergence thinkLets and to provide implications for practice. Finally, the analyses in our current study focused predominantly on the creation of shared understanding during a convergence activity. Other aspects of convergence (filtering, abstracting, or synthesizing) were not investigated. Such an investigation may focus, for example, on the reduction rates of the resulting idea sets between generation and convergence, or on the extent to which teams synthesized ideas under the three facilitation conditions.

8. References

[1] Salas, E., Cooke, N.J., and Gorman, J.C., "The Science of Team Performance: Progress and the Need for More…", Human Factors: The Journal of the Human Factors and Ergonomics Society, 52(2), 2010, pp. 344-346.
[2] Kerr, D.S., and Murthy, U.S., "Divergent and Convergent Idea Generation in Teams: A Comparison of Computer-Mediated and Face-to-Face Communication", Group Decision and Negotiation, 13(4), 2004, pp. 381-399.
[3] Kolfschoten, G.L., and Brazier, F.M., "Cognitive Load in Collaboration: Convergence", Group Decision and Negotiation, 22(5), 2013, pp. 975-996.
[4] Girotra, K., Terwiesch, C., and Ulrich, K.T., "Idea Generation and the Quality of the Best Idea", Management Science, 56(4), 2010, pp. 591-605.
[5] De Vreede, G., and Briggs, R.O., "Collaboration Engineering: Designing Repeatable Processes for High-Value Collaborative Tasks", Proceedings of the 38th Annual Hawaii International Conference on System Sciences, 2005, pp. 1-10.
[6] Davis, A., Badura, V., De Vreede, G.-J., and Read, A.S., "Understanding Methodological Differences to Study Convergence in Group Support System Sessions", in (Briggs, R.O., Antunes, P., and De Vreede, G.-J.): Groupware: Design, Implementation, and Use, Springer, Berlin-Heidelberg, 2008, pp. 204-216.
[7] Reiter-Palmon, R., and Illies, J.J., "Leadership and Creativity: Understanding Leadership from a Creative Problem-Solving Perspective", The Leadership Quarterly, 15(1), 2004, pp. 55-77.
[8] Kennel, V., Reiter-Palmon, R., De Vreede, T., and De Vreede, G.-J., "Creativity in Teams: An Examination of Team Accuracy in the Idea Evaluation and Selection Process", Proceedings of the 46th Hawaii International Conference on System Sciences, 2013, pp. 630-639.
[9] Dennis, A.R., Wixom, B.H., and Vandenberg, R.J., "Understanding Fit and Appropriation Effects in Group Support Systems Via Meta-Analysis", MIS Quarterly, 25(2), 2001, pp. 167-193.
[10] Den Hengst, M., and Adkins, M., "Which Collaboration Patterns Are Most Challenging: A Global Survey of Facilitators", Proceedings of the 40th Annual Hawaii International Conference on System Sciences, 2007, pp. 248-257.
[11] Briggs, R.O., and De Vreede, G.J., Thinklets: Building Blocks for Concerted Collaboration, University of Nebraska, Center for Collaboration Science, Omaha, NE, 2009.
[12] Massey, A.P., Montoya-Weiss, M.M., and Hung, Y.-T., "Because Time Matters: Temporal Coordination in Global Virtual Project Teams", Journal of Management Information Systems, 19(4), 2003, pp. 129-155.
[13] Bittner, E.A.C., and Leimeister, J.M., "Creating Shared Understanding in Heterogeneous Work Groups: Why It Matters and How to Achieve It", Journal of Management Information Systems, 31(1), 2014, pp. 111-143.
[14] Espinosa, J.A., Cummings, J.N., and Pickering, C., "Time Separation, Coordination, and Performance in Technical Teams", IEEE Transactions on Engineering Management, 59(1), 2012, pp. 91-103.
[15] Limayem, M., and Desanctis, G., "Providing Decisional Guidance for Multicriteria Decision Making in Groups", Information Systems Research, 11(4), 2000, pp. 386-401.
[16] Van Den Bossche, P., Gijselaers, W., Segers, M., Woltjer, G., and Kirschner, P., "Team Learning: Building Shared Mental Models", Instructional Science, 39(3), 2011, pp. 283-301.
[17] Tan, B., Wei, K.-K., and Lee-Partridge, J., "Effects of Facilitation and Leadership on Meeting Outcomes in a Group Support System Environment", European Journal of Information Systems, 8(4), 1999, pp. 233-246.
[18] Dennis, A.R., and Wixom, B.H., "Investigating the Moderators of the Group Support Systems Use with Meta-Analysis", Journal of Management Information Systems, 18(3), 2002, pp. 235-257.
[19] Griffith, T.L., Fuller, M.A., and Northcraft, G.B., "Facilitator Influence in Group Support Systems: Intended and Unintended Effects", Information Systems Research, 9(1), 1998, pp. 20-36.
[20] Schein, E.H., "The Role of the Consultant: Content Expert or Process Facilitator?", The Personnel and Guidance Journal, 56(6), 1978, pp. 339-343.
[21] Clawson, V.K., Bostrom, R.P., and Anson, R., "The Role of the Facilitator in Computer-Supported Meetings", Small Group Research, 24(4), 1993, pp. 547-565.
[22] Aakhus, M., "Technocratic and Design Stances toward Communication Expertise: How GDSS Facilitators Understand Their Work", Journal of Applied Communication Research, 29(4), 2001, pp. 341-371.
[23] Briggs, R.O., Kolfschoten, G.L., De Vreede, G.-J., Albrecht, C., Lukosch, S., and Dean, D.L., "A Six-Layer Model of Collaboration", in (Nunamaker, J.F.J., Romano, N.J., and Briggs, R.O.): Collaboration Systems: Concept, Value, and Use, M.E. Sharpe, Armonk, NY, 2013, pp. 211-227.
[24] Briggs, R.O., De Vreede, G.J., and Nunamaker, J.F., "Collaboration Engineering with Thinklets to Pursue Sustained Success with Group Support Systems", Journal of Management Information Systems, 19(4), 2003, pp. 31-64.
[25] De Vreede, G.J., Briggs, R.O., and Massey, A.P., "Collaboration Engineering: Foundations and Opportunities: Editorial to the Special Issue on the Journal of the Association of Information Systems", Journal of the Association for Information Systems, 10(3), 2009, pp. 121-137.
[26] Davis, A., De Vreede, G.-J., and Briggs, R.O., "Designing Thinklets for Convergence", AMCIS 2007 Proceedings, 2007.
[27] Wheeler, B.C., and Valacich, J.S., "Facilitation, GSS, and Training as Sources of Process Restrictiveness and Guidance for Structured Group Decision Making: An Empirical Assessment", Information Systems Research, 7(4), 1996, pp. 429-450.
[28] Pentland, A., "The New Science of Building Great Teams", Harvard Business Review, 90(4), 2012, pp. 60-69.
[29] Fiore, S.M., Smith-Jentsch, K.A., Salas, E., Warner, N., and Letsky, M., "Towards an Understanding of Macrocognition in Teams: Developing and Defining Complex Collaborative Processes and Products", Theoretical Issues in Ergonomics Science, 11(4), 2010, pp. 250-271.
[30] Reinig, B.A., Briggs, R.O., and De Vreede, G.-J., "Satisfaction as a Function of Perceived Change in Likelihood of Goal Attainment: A Cross-Cultural Study", International Journal of E-Collaboration, 5(2), 2009, pp. 61-74.
[31] Briggs, R.O., Reinig, B.A., and De Vreede, G.-J., "The Yield Shift Theory of Satisfaction and Its Application to the IS/IT Domain", Journal of the Association for Information Systems, 9(5), 2008, pp. 267-293.
[32] Miranda, S.M., and Bostrom, R.P., "Meeting Facilitation: Process Versus Content Interventions", Proceedings of the 30th Annual Hawaii International Conference on System Sciences, 1997, pp. 124-133.
[33] Den Hengst, M., and Adkins, M., "The Demand Rate of Facilitation Functions", Proceedings of the 38th Annual Hawaii International Conference on System Sciences, 2005, pp. 1-10.
[34] Briggs, R.O., Nunamaker, J.F., and Sprague, R.H., "1001 Unanswered Research Questions in GSS", Journal of Management Information Systems, 14(3), 1998, pp. 3-22.
[35] Briggs, R., "The Focus Theory of Group Productivity and Its Application to the Design, Development, and Testing of Electronic Group Support Technology", PhD dissertation, Management Information Systems Department, University of Arizona, Tucson, 1994, pp. 13-252.
[36] Lowry, P.B., Nunamaker, J.F., Curtis, A., and Lowry, M.R., "The Impact of Process Structure on Novice, Virtual Collaborative Writing Teams", IEEE Transactions on Professional Communication, 48(4), 2005, pp. 341-364.
[37] Stahl, G., "A Model of Collaborative Knowledge-Building", Proceedings of the 4th International Conference of the Learning Sciences, 2000, pp. 70-77.
[38] Cannon-Bowers, J.A., Salas, E., and Converse, S.A., "Shared Mental Models in Expert Team Decision Making", in (Castellan, N.J.J.): Current Issues in Individual and Group Decision Making, Erlbaum, Hillsdale, NJ, 1993, pp. 221-246.
[39] Mathieu, J.E., Heffner, T.S., Goodwin, G.F., Salas, E., and Cannon-Bowers, J.A., "The Influence of Shared Mental Models on Team Process and Performance", Journal of Applied Psychology, 85(2), 2000, pp. 273-283.
[40] Burke, C.S., Stagl, K.C., Salas, E., Pierce, L., and Kendall, D., "Understanding Team Adaptation: A Conceptual Analysis and Model", Journal of Applied Psychology, 91(6), 2006, pp. 1189-1207.
[41] West, M.A., and Anderson, N.R., "Innovation in Top Management Teams", Journal of Applied Psychology, 81(6), 1996, pp. 680-693.
[42] Santanen, E.L., Briggs, R.O., and De Vreede, G.-J., "Causal Relationships in Creative Problem Solving: Comparing Facilitation Interventions for Ideation", Journal of Management Information Systems, 20(4), 2004, pp. 167-197.
[43] McGrath, J.E., Groups: Interaction and Performance, Prentice-Hall, Inc., New Jersey, 1984.
[44] Green, S.G., and Taber, T.D., "The Effects of Three Social Decision Schemes on Decision Group Process", Organizational Behavior and Human Performance, 25(1), 1980, pp. 97-106.
[45] Reinig, B.A., "Toward an Understanding of Satisfaction with the Process and Outcomes of Teamwork", Journal of Management Information Systems, 19(4), 2003, pp. 65-84.
[46] Davis, F.D., "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology", MIS Quarterly, 1989, pp. 319-340.
[47] Hair, J.F.J., Black, W.C., Babin, B.J., and Anderson, R.E., Multivariate Data Analysis: A Global Perspective, Pearson Education, New Jersey, 2010.
[48] Sheskin, D.J., Handbook of Parametric and Nonparametric Statistical Procedures, Taylor & Francis, 2007.
[49] Howell, D., Statistical Methods for Psychology, 8th edn, Wadsworth Cengage Learning, Belmont, CA, 2012.
[50] Tan, B.C.Y., Wei, K.K., and Lee-Partridge, J.E., "Effects of Facilitation and Leadership on Meeting Outcomes in a Group Support System Environment", European Journal of Information Systems, 8(4), 1999, pp. 233-246.
[51] Venkatesh, V., Morris, M.G., Davis, G.B., and Davis, F.D., "User Acceptance of Information Technology: Toward a Unified View", MIS Quarterly, 27(3), 2003, pp. 425-478.


Appendix: Constructs, Cronbach's α (r)¹, and items

Perceived satisfaction with the product after generation [45], .827 (.624):
- I am very satisfied with the quality of our ideas.
- The ideas reflect my inputs to a great extent.
- I feel to a great extent committed to the ideas of the team.
- I am to a great extent confident that the ideas of the team are correct.
- I feel to a great extent personally responsible for the correctness of the ideas.*

Perceived satisfaction with the product after convergence [45], .834 (.749):
- I am very satisfied with the quality of the final list of supporting measures.
- The final list of supporting measures reflects my inputs to a great extent.
- I feel to a great extent committed to the final list of supporting measures of the team.
- I am to a great extent confident that the final list of supporting measures of the team are correct.
- I feel to a great extent personally responsible for the correctness of the final list of supporting measures.*

Perceived satisfaction with the process after generation [50], .779 (.466):
- The process of idea generation was efficient.
- The process of idea generation was coordinated.
- The process of idea generation was fair.
- The process of idea generation was understandable.
- The process of idea generation was satisfying.

Perceived satisfaction with the process after convergence [50], .872 (.701):
- The process of clarifying and reducing ideas was efficient.
- The process of clarifying and reducing ideas was coordinated.
- The process of clarifying and reducing ideas was fair.
- The process of clarifying and reducing ideas was understandable.
- The process of clarifying and reducing ideas was satisfying.

Perceived usefulness of moderation after generation [51], .931 (.820):
- The moderation was useful for collecting ideas.
- The moderation led to a quicker collection of ideas.
- The moderation increases my productivity when collecting ideas.
- The moderation increases my chances of good results when collecting ideas.

Perceived usefulness of moderation after convergence [51], .932 (.731):
- The moderation was useful for reducing ideas.
- The moderation led to a quicker reduction of ideas.
- The moderation increases my productivity when reducing ideas.
- The moderation increases my chances of good results when reducing ideas.
- The moderation was useful for clarifying ideas.
- The moderation led to a quicker clarification of ideas.
- The moderation increases my productivity when clarifying ideas.
- The moderation increases my chances of good results when clarifying ideas.

Shared understanding, .828 (.524):
- My team looked for different interpretations of a problem.
- My team communicated with other teammates while reaching the decision.
- My team used a common vocabulary in the discussions.
- My team consistently demonstrated effective listening skills.
- My team shared information.
- Everybody in my team strived to express his or her opinion.
- My team often utilized different opinions for the sake of obtaining optimal outcomes.

* Dropped items
¹ Lowest corrected item-total correlation
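The reliability statistics in the Appendix (Cronbach's α and the lowest corrected item-total correlation) can be sketched as follows. The response matrix is synthetic, generated from a single hypothetical latent construct with item noise; it does not reproduce the study's instrument data.

```python
# Sketch of the Appendix reliability statistics: Cronbach's alpha and
# corrected item-total correlations. Responses below are synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    k = items.shape[1]
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, i], totals - items[:, i])[0, 1] for i in range(k)
    ])

rng = np.random.default_rng(1)
latent = rng.normal(0, 1, 200)  # shared construct across respondents
responses = np.column_stack([
    latent + rng.normal(0, 0.8, 200) for _ in range(5)  # five noisy items
])
print(f"alpha = {cronbach_alpha(responses):.3f}")
print(f"lowest item-total r = {corrected_item_total(responses).min():.3f}")
```

A low corrected item-total correlation flags an item that tracks the rest of the scale poorly, which is the criterion behind the starred dropped items above.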
