CoDesign, Vol. 00, No. 0, Month 2005, 1 – 17
An experimental methodology for investigating communication in collaborative design review meetings

KAREN J. OSTERGAARD†, WILLIAM R. WETMORE III‡, AMEYA DIVEKAR§, HENRY VITALI¶ and JOSHUA D. SUMMERS*‖

†BorgWarner, Asheville, North Carolina, USA
‡Bosch, Rutesheim, Germany
§Parametric Technologies Corporation, Pune, India
¶New South Lumber Company, Camden, South Carolina, USA
‖Department of Mechanical Engineering, Clemson University, 250 Fluor Daniel, Clemson, SC 29634-0921, USA

(Received 16 March 2005; in final form 10 August 2005)
Product design teams, composed of individuals with diverse expertise, communicate continually throughout the realization process, especially during formal collaborative design reviews. Individuals with the required expertise for conducting design reviews may be distributed across different geographic locations, requiring either expensive travel or new communication tools to simulate onsite collaborative communication. This paper presents the methods and results of a controlled user study devised to examine the effectiveness of various communication methods for design reviews. Speech-only, text-only, and face-to-face communication methods were chosen to simulate current technologies commonly used in situations of geographic distribution: phone conference, text chat, and onsite meetings, respectively. Primary results from the study include the following: group design reviews in all modes of communication were approximately twice as effective as individual design reviews; face-to-face communication produced a greater perceived effectiveness than speech-only communication; and speech-only communication produced a greater perceived effectiveness than text-only communication. Despite this self-reported perceived effectiveness, there was no statistically significant difference between the measured effectiveness of design reviews conducted under the different communication-mode scenarios. Review effectiveness was evaluated by a quantitative measure: the number of flaws (nonconformances to the specified requirements) identified and documented by the design review team. For experimental control, these flaws were intentionally integrated into the product to be reviewed.
*Corresponding author. Email: [email protected]
CoDesign ISSN 1571-0882 print / ISSN 1745-3755 online © 2005 Taylor & Francis, http://www.tandf.co.uk/journals, DOI: 10.1080/15710880500298520
Keywords: Design review; Collaborative design; Communication mode; User study methodology
1. Introduction

Design reviews, collaborative efforts, are conducted in order to monitor and verify product quality, robustness, and conformance to customer-specified function. Two types of design reviews have been identified in the literature: selective and evaluative (Wetmore 2004). Selective reviews, such as customer-based metrics, are used to select between presented alternatives, while evaluative tools, such as failure modes and effects analysis (FMEA) and active reviews for intermediate designs (ARID), are used to evaluate a single design against its conformance with the specifications and requirements (Parnas and Weiss 1985, Teng 1996, Kirschman and Fadel 1997, Cotnareanu 1999, Clements 2000). In evaluative reviews, the focus of this investigation, the design team seeks to identify areas of concern: those requiring further development or modification, or preventing ultimate acceptance of the overall solution. While identifying these errors, the team may also document generated alternatives, directions, or justifications for why the flaw is of concern. Design reviews are typically conducted at multiple stages in the design process in a wide range of industries: from automotive to aerospace, and from biomedical to software. It has been found that the earlier the review is conducted, the less likely it is that expensive design changes will have to be made (Sater-Black and Iverson 1994). Design review participants may be collocated or distributed across a variety of boundaries: geographic, organizational, and temporal. While distribution may be desired or necessary to achieve an ideal combination of expertise in the team, the dispersion may also make it too difficult or costly to realize the benefits of this expertise. These experts may include the product's customer, component suppliers, development team, and manufacturing team. One solution to this problem of distribution is to use communication tools (video conferencing, phone conferencing, Internet chat, etc.)
to simulate the desired collocation of team members. The goal of this investigation is to study the impact of communication method on the success of synchronous design reviews in identifying design errors. The communication scenarios compared were speech-only, text-only, and face-to-face communication. Individual reviews were also conducted as a comparison against the group design reviews to highlight the utility of conducting reviews in groups. This research is a first step in studying the design review process, with an eventual goal of determining when and how distributed collaborative design should be facilitated by improved communication and information-sharing tools. The research exercise was conducted via a controlled user study in which participants performed design reviews using prescribed communication methods. The problems, errors, or flaws identified during the reviews were used as the quantifiable basis for comparison of design review effectiveness.
2. Background

2.1 Design reviews
Design reviews are conducted throughout the product realization process to identify technical risks in performance, manufacturing, testing, and use (QRAM 2001). The primary objective is typically to ensure that the design is in conformance with its
requirements, with a secondary objective of highlighting potential deficiencies in the design as viewed by the various stakeholders or participants in the design review. The design review process is considered critical to reducing risk in many design projects, and it can provide the discipline necessary for timely identification of design problems and their solutions. While the time at which reviews are conducted may vary depending on the product, company, or team members, reviews are typically an iterative process in which earlier stages of the design process are revisited. The first step in conducting a design review is to identify the participating members by listing the characteristics of the design and identifying the resources needed for the characteristics to be discussed (Pugh 1991). For example, if an existing product is being reviewed for manufacturability, areas such as ergonomics and legislation need not be considered. However, if these subjects are related to characteristics of interest, the associated parties should be included. By utilizing this method, parties with the appropriate expertise can be correctly identified for the review. The required expertise represented by the identified participants may be distributed across different geographic locations of an organization. This distribution of expertise can exist in the form of suppliers, manufacturing facilities, and customers located in varying locations around the globe. Consequently, design review teams are facing new challenges to effective communication, such as in task efficiency and the effectiveness of high-level decision-making (Hiltz et al. 1986, Bordia 1997, Olson and Olson 2000, Wierba et al. 2001). To address these challenges, new collaboration technologies are necessary to overcome the inherent resistance to the flow of information encountered by distributed design teams (Case and Lu 1996). Specific tools exist to aid the participants of a design review in meeting their objective.
The tools are created for two distinct functions: evaluating new concepts and auditing a current design. Evaluation and design selection tools are used to identify and rate the most appropriate design solution created during the novel design or idea generation processes. A matrix that enables the systematic rating of designs based on conformance to prescribed requirements is an example of an evaluation tool (Ullman 2003). Audit tools are used to quantify, verify, initiate, and justify changes in a design. An audit tool, such as a design review checklist, allows the participants of the review to systematically check for conformance between the selected design and the customer-derived product-design specification (PDS) (Pahl and Beitz 1996). The PDS is a document that defines the primary function, design constraints, and design criteria. A strong conformance suggests that the design is sufficient to proceed in the design process. Other types of audit tools, typically classified as 'design for X' tools, seek to identify and verify conformance to certain objectives, such as design for automated assembly, customer, manufacture, injection molding, robot assembly, manual assembly, and robustness (Dixon and Poli 1995). These types of audit design review tools serve to highlight particular design data in design review meetings in order to identify and justify changes before a design advances to the manufacturing phase. Group cohesion, or participant familiarity with each other, can be seen as both a benefit and a potential drawback. When a group is highly cohesive (lack of conflict, strong personal relationships), the eight symptoms of groupthink can result, notably an illusion of invulnerability, in which groups are led to believe they are incapable of error, ignore obvious danger signs, and rationalize poor decisions (Griffin 1997, Wetmore and Summers 2003).
Each of these can have a signiﬁcant impact on the eﬀectiveness of a design review. Conversely, group cohesion is also identiﬁed as a constructive aspect in small-group activities where
research shows a positive relationship between group performance and cohesion (Chansler et al. 2003). The dynamic and often ephemeral nature of design review teams is a challenge to developing group cohesiveness. This issue of group cohesion in design reviews is investigated in other studies (Wetmore and Summers 2004).

2.2 Communication in design
Communication is essential both between participants during the review and between the participants and those tasked with resolving the identified design errors after the review. Effective communication tools, such as face-to-face collocated meetings, video conferences, telephone conferencing, e-mail, and Internet chat, allow distributed or collocated designers to collaboratively discuss issues of design. The level of interaction is limited by the bandwidth of the communication method utilized (Hammond 2001). For example, face-to-face discussions can provide signals to other participants of the group via the five senses. However, in the other communication forums where only one or two senses are engaged, such as web-based chat, video, and teleconferencing, the potential bandwidth and resulting efficiency in information transfer are reduced. As this information exchange decreases, group members alter the nature of their communication processes. They seek to maintain a comfortable level of communication by utilizing compensating mechanisms, such as increasing mental effort or limiting the amount of data considered. Bordia proposed that groups using text-based computer-mediated communication for idea generation take longer than face-to-face teams to complete a given task and produce fewer remarks in a given time period (Bordia 1997). These limitations are considered technical, since they are a result of using typing as a communication tool. Consequences of these limitations include frustration and use of an altered language (more task-oriented and less social-emotional). Some evidence suggests that these are not permanent limitations, though, since the performance of computer-mediated groups nears that of groups using face-to-face communication given enough time to adapt to the technology. Case studies of distributed collaboration in architectural design firms and graduate-level design studios have been used to compare communication (Chiu 2000).
In the graduate studios, 50% of communication conducted was related to solving design problems, and 50% was related to deﬁning the design problem. The case study also identiﬁed that 50% of the project’s total time was spent in communication with other project-associated parties, and the frequency of communication between the parties was three to ﬁve times per day. The primary mode of communication was the Internet, which provided a platform for representing and storing all design information and supported asynchronous activity through access to various design information. The synchronous collaboration was supported through video conferencing, shared whiteboard, and shared CAD systems. In the architectural design ﬁrms, 78% of communication was related to solving the problems, while 21% of communication was related to deﬁning the problem. The study also noted that 64% of those surveyed from the architectural ﬁrm thought that ineﬀective feedback during design communication was caused by unclear design information that required further explanation. The amount of time spent in communication during the design consumed 40% of total design time, with individuals communicating every 1 – 3 days. During asynchronous communication, design representation included verbal description, sketches, tables, and photographs. In synchronous communication, participants preferred the use of visual presentation plus oral communication. This included the use of fax and telephone for geographically distributed
parties. However, interviews with the designers emphasized the importance of face-to-face contact. This type of exchange is lacking in the synchronous communication methods noted above. Dispersion of team members may have a significant impact on the team's choice of communication techniques, frequency of communication, and language (Lurey and Raisinghani 2001). The results of research conducted to compare the graphic communication of distributed teams with that of collocated design teams show that remote designers spent 51% more time making graphic acts (drawings, sketches, etc.) than their collocated counterparts (Garner 2001). However, despite spending more time in the act of sketching, the actual production of drawings and sketches decreased significantly when teams were distributed. One possible explanation may be that the designers in the computer-mediated situation used the stylus both to sketch and to focus the attention of their partners. The availability of communication resources may also be inhibited, primarily by geographic and organizational boundaries. Cohesion and efficient operation in distributed design teams require more computational design support than is needed by non-distributed teams, as evidenced in distributed agent design work (Lees et al. 2001). This is further supported by research in communication impedance, or the difficulty in communicating when teams are not collocated (Case and Lu 1996). Social presence is the ability to make users feel as if they are collocated with their communicative partners (Carletta 2000). The effectiveness of technologies such as video conferencing, which attempt to simulate a collocated environment, is hindered because visual cues for turn taking are less perceptible via a television. Participants tend to feel more distant if the communication technology is not in real time. This lack of social presence reduces social interaction and thereby weakens group solidarity and commitment.
However, other research has found that group inhibitions are reduced when parties are separated, and they are more likely to participate (Lim and Benbasat 1997). This is in contrast to collocated parties in which lower-status parties are less likely to contribute. An additional factor of interest in design communication is the eﬀect of familiarity of group members on the frequency, quality, and formality of exchanges (Smolensky et al. 1990, Orengo Castella et al. 2000). In a study to investigate the inﬂuence that familiarity, group atmosphere, and assertiveness have upon uninhibited group behaviour based upon mode of communication, researchers found that group familiarity inﬂuences the informal dialog exchanges in face-to-face and computer-mediated communication (video conferencing or text exchange) (Orengo Castella et al. 2000). However, this work focused on the frequency of a type of communication, not on the eﬀectiveness that communication and familiarity variables have upon group decision-making. Research has focused on developing communication tools and software to facilitate collaborative design reviews and other design activities. This software is typically designed to mimic the communication achieved with face-to-face synchronous communication, since it is presumed to be more eﬀective than typical methods used by distributed parties. One example is the development of a system for Distributed Design Review in Virtual Environments (DDRIVE) by HRL Laboratories and General Motors Research and Development Center (Daily et al. 2000). DDRIVE allows users to communicate in 2-D and immersive 3-D environments. A more readily available, but less powerful, tool is Microsoft’s NetMeeting, which enables 2-D desktop collaboration with text, video, and audio features. Research suggests that of the diﬀerent forms of communication, audio is the most pivotal for engineering design, suggesting that the
work done in developing new collaborative design tools should include and strengthen auditory modes of communication (Kirschman and Greenstein 2002). While there is much study of communication modes in engineering design, much of this work has focused either on individual modes (sketching face-to-face vs. sketching through computer mediation) or on the resulting frequency of communication in different modes. While important in providing guidance for developing new computer-supported collaborative work systems, these studies tend to deemphasize the comparison of the results of the communication in design tasks. Just as important as understanding the frequency of information exchange and the uninhibited nature of that exchange is understanding whether different modes of communication actually change the effectiveness or productivity in a design task. Rather than looking at the entire product realization process, this paper proposes an experimental methodology to investigate the effects of communication modes on the ability of groups of designers to capture design inadequacies in structured collaborative design reviews.

3. User study methodology

The research presented here is conducted via a controlled user study, which allows researchers to identify particular variables of interest and observe the impact of varying those factors on the result (McKoy et al. 2001, Shah et al. 2001, Kirschman and Greenstein 2002). These variables are used in designing experiments to assess specific influences in a controlled environment that simulates portions (or scales) of real situations. This is in contrast to protocol studies, which are used to observe processes or procedures (Adelson 1989, Simoff and Maher 2000, Costa and Sobek 2004). While limited quantitative information may be gained, this type of study does provide descriptive qualitative results. Popular user study methods include surveys, focus groups, interviews, observation, and diary methods (Brewerton and Millward 2001).
In order to study the effects of distribution on the effectiveness of design reviews, a user study was designed to simulate available communication methods for distributed collaboration in three scenarios, with communication mode as the design variable: (1) face-to-face communication, (2) audio communication, and (3) real-time text communication. A fourth scenario, consisting of students working independently, was studied to allow for a comparison with individual design reviews. In the first scenario, characterized by face-to-face communication, design teams were sequestered in a conference room and allowed to use speech, textual, graphical, and gesture forms of communication, thus simulating a collocated team. No additional tools, such as textbooks or computers, or communication methods, such as computerized whiteboards, were permitted. In the second scenario, teams were allowed to communicate via speech only. Team members were physically located in the same room but were separated by partitions to prevent use of visual communication methods. This scenario simulated designers communicating via telephone or other methods limited to voice communication. Note that this method was 'full-duplex' because team members could send and receive data at the same time (i.e. more than one team member could speak simultaneously). Conversely, some communication methods, such as traditional speakerphones for phone conferencing and some Internet voice communication programs, are half-duplex. With these systems, only one user can transmit voice data at a time. Teams in the third scenario were restricted to textual communication via a computer text-chat program. Text chat occurs in an environment in which users can type messages
to be viewed by all other current users. Messages do not appear in the shared environment in real time. Instead, the user writing a message must submit that message to make it viewable by all users. While users in this scenario were located in the same computer lab, they were not permitted to communicate with speech or gestures. In addition to these scenarios of primary interest, individuals completed the same design review problem to provide a comparison between group reviews with various communication methods and reviews completed by a single reviewer. Table 1 provides a summary of the variables of study, the replications for each scenario, and the number of participants per replication and scenario. The design review teams included either five or six participants. Each scenario was replicated twice. Nine individuals were used to study the effectiveness of using groups for design reviews.

3.1 Participants

Participants in this user study were drawn from a sophomore mechanical engineering course introducing students to tools and methods of engineering design. All participants had previous experience with the selected communication methods. For most of the participants, the only previous experience with design reviews was an instructional lecture conducted in a class session prior to the user study. During the design review lecture, students were provided with an overview of design reviews, were introduced to the basic design documents and checklists used in reviews, and participated in an in-class practice design review using these documents. Three of the participants had previous, but minimal (~1 h), exposure to design reviews, specifically FMEA, in industrial settings through school-sponsored co-op programmes or internships. All teams were randomly selected, and a recorder was randomly assigned to each team to facilitate data collection. Issues such as gender, race, and expertise were not considered in selecting or organizing the participants. Note, however, that as these variables are not fully controlled, they may have an effect on the results of the study. Further, the diversity of students in this course is typical of most mechanical engineering courses at Clemson University, where the majority of the students are white and male.

3.2 Design problem

A design problem, focused on the design of automotive pliers, was devised for analysis via a design review. Associated documentation created for the design problem and supplied to the participants for the review included design drawings, a PDS, and a bill of materials. The PDS provided information such as functions and design constraints.
Table 1. Summary of user study scenarios.

Scenario       Replication   Participants per replication
Face-to-face   1             5
Face-to-face   2             5
Speech-only    1             6
Speech-only    2             6
Text-only      1             5
Text-only      2             5
Individual     -             9 individuals
Design inadequacies were intentionally included in these documents in order to provide a base set of errors or flaws that the teams should attempt to identify. For example, design errors associated with the assembly drawing are highlighted in figure 1. All design inadequacies were categorized via problem decomposition, shown in table 2. Note that each problem was assigned a relative weighting according to ease of identification, with higher weightings assigned to problems that required use of multiple documents, were not explicitly stated in the documents, or required in-depth understanding of a technical subject. A scale of 1-4-9 was used to provide granularity between low, medium, and high weights. A panel of four graduate engineering design students judged the ease of identification of the problems and assigned weights accordingly. In addition to the design documents noted above, a design review checklist was devised to facilitate identification and recording of design problems. The checklist was based on examples and recommendations by Pahl and Beitz (1996). A completed design review checklist is shown in figure 2. Specific design problems identified by this team are highlighted and detailed in table 3. A calibration study was conducted to verify that the design problem was clear, the time allotted (35 min) was sufficient, and the design inadequacies were appropriate. Participants in the calibration study, four graduate mechanical engineering design students, conducted design reviews individually. While the expertise of these participants differed from that of participants in the user study, it was felt that the calibration study did validate the suitability of the design inadequacies and allotted time. Feedback from the calibration study also resulted in edits to the product-design specification, checklist, and design drawings to clarify the instructions and improve the usability of the documents.
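To make the 1-4-9 scoring scheme concrete, the weighted-total calculation described above can be sketched as follows. This is an illustrative sketch only: the flaw IDs and weights follow table 2, but the function and variable names are our own, and the example list of identified flaws is hypothetical.

```python
# Illustrative sketch of the 1-4-9 weighted scoring described above.
# Flaw IDs and weights are taken from table 2; everything else
# (names, example data) is hypothetical.

FLAW_WEIGHTS = {
    "F1": 4, "F2": 4, "F3": 4, "F4": 9, "F5": 9, "F6": 9, "F7": 4,  # functional
    "D1": 9, "D2": 9, "D3": 9,                                      # dimensional
    "B1": 1, "B2": 1,                                               # bill of materials
    "M1": 4, "M2": 4,                                               # material
    "E1": 1,                                                        # ergonomic
}

def score_review(identified):
    """Return (unweighted count, weighted total) for one team's flaw list."""
    unweighted = len(identified)
    weighted = sum(FLAW_WEIGHTS[flaw] for flaw in identified)
    return unweighted, weighted

# Hypothetical team that found every flaw except the four that were
# rarely identified in the study (F5, F6, F7, and D3).
found = [f for f in FLAW_WEIGHTS if f not in {"F5", "F6", "F7", "D3"}]
print(score_review(found))  # (11, 50)
```

A perfect review would score (15, 81) under this weighting; the gap between the unweighted and weighted scores reflects how many of the harder, higher-weight flaws a team captured.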
As noted above, the participants were randomly divided into groups without regard to gender, race, expertise, or personality. A graduate student observer was assigned to each group and given the responsibilities of administering the scenario, ensuring basic rules of
Figure 1. Assembly drawing for pliers design with design errors noted.
Table 2. Problem decomposition for pliers design.

ID   Type          Description                                                     Weight
F1   Functional    Jaw surfaces are of unsuitable design for gripping              4
F2   Functional    Lever arm to jaws is longer than lever arm to the end of
                   the handle (loss of mechanical advantage)                       4
F3   Functional    No locking mechanism to secure the pin                          4
F4   Functional    Jaws do not contact each other when handles are closed          9
F5   Functional    Jaw surfaces will not be parallel when handles are closed       9
F6   Functional    Too small for application                                       9
F7   Functional    Insufficient jaw opening                                        4
D1   Dimensional   Tolerance issue with pin and lever through hole                 9
D2   Dimensional   System of units mentioned on drawing is inconsistent with
                   measurements                                                    9
D3   Dimensional   Insufficient dimensions for manufacturing to build parts        9
B1   BOM           Quantity error in bill of materials                             1
B2   BOM           Cost exceeds constraint                                         1
M1   Material      Inappropriate material specified for handle                     4
M2   Material      Inappropriate material specified for pin                        4
E1   Ergonomic     Handles are not of ergonomic design                             1
communication were followed, and recording general observations about the behaviour and progress of the design review team. The observer distributed identical design documents to each member of the team and instructed them to submit a single list of problems at the conclusion of the review. A team recorder, randomly selected before the study, was instructed to compile the submitted list. No additional training was provided to the recorder. Each team was given 35 min to complete the design review. In addition to the design variable of communication method, other variables may influence the given scenarios and must be controlled or considered in the data analysis. Presenting an identical design to each team controlled the design problem variable. The observers regulated the time duration of the review. Organizing the teams into groups of uniform size controlled team size (either five or six students per team). Communication resources were controlled and regulated by the graduate observers according to the scenario definitions provided earlier. A checklist was provided to each team member to promote a consistent design review methodology and documentation strategy. The experience variable was addressed by engaging all participants in a practice session following an introduction to design reviews but before the experimental exercise. A team administrative structure was partially imposed by randomly assigning a participant in each group to the role of recorder.

3.3 Data collection

The data from the study consist of a record, compiled by the team recorder, of the design inadequacies found during the review period. A panel of judges, the graduate design students conducting the exercise, evaluated each team's checklist to determine whether a problem had been identified by the review team. As the terminology used varied from team to team, the panel collectively developed a consensus view of the intent of each team's recorder.
Figure 2. Team completed design-review checklist.
A sum of the identiﬁed problems and a weighted total of the identiﬁed problems were used as the basis for comparison of the impact of communication method on the design review eﬀectiveness. An important justiﬁcation for the choice of user-generated data as the basis for analysis is that it is objective and quantitative, whereas other data, such as
Table 3. Sample identified errors in figure 2.

Label   Type         Description
A       Functional   Jaw surfaces are of unsuitable design for gripping
B       Functional   Lever arm to jaws is longer than lever arm to the end of the
                     handle (loss of mechanical advantage)
C       Functional   Jaws do not contact each other when handles are closed
D       Dimension    Tolerance issue with pin and lever through hole
E       Dimension    Inconsistent use of units
F       BOM          Quantity error in bill of materials
G       BOM          Cost exceeds constraint
H       Material     Inappropriate material specified for handle
I       Material     Inappropriate material specified for pin
J       Ergonomics   Handles are not of ergonomic design
self-reported user efficacy, team-measured group cohesiveness, or observer-recorded cognitive processes, are both subject to the interpretation of the observer and qualitative in nature. Further, the goal of this study is to improve the outcome of the collaborative design review activity by identifying the variables that are most significant and influential with respect to the outcomes. Thus, by analysing the documents representing the outcome, the lists of design inadequacies, an understanding is derived that directly relates to the stated objective of improving performance. All documents marked up by any participant were collected for use in the data analysis to augment the checklists and to ensure that cross-contamination between the two replications would be minimized. The two replications were conducted in two sections of the same course on the same day, but at different times. Each graduate observer recorded basic information concerning the behaviour and progress of each team on an evaluation form. To supplement the data collected via the design review user study, each participant completed a survey regarding such items as perceived effectiveness of the design review, perceived limitations to progress, level of participation allowed, and previous design review experience. While subjective and qualitative, these surveys provided some understanding of users' perception of performance, with the intent to guide the development of future experiments.

3.4 Results
The design flaws recorded by each team recorder were compared with the problem decomposition of table 2. Corresponding problem counts and weighted problem totals were calculated for each team. These results are summarized in table 4. The problem types include: Functional (F), Dimensional (D), Bill of Materials (B), Material (M), and Ergonomic (E). While no mode of communication is found to be significantly different from the other modes (based upon a small sample size of only two replications per mode), some design flaws were identified by only one or two teams, while other flaws were identified by all teams. The inadequacies that were recognized by two or fewer groups (F5, F6, F7, and D3) are highlighted in table 4. No single mode of communication or team identified all four of these flaws. This suggests that the mode of communication does not immediately correlate with the types of flaws identified. The design inadequacies reported do not show a clearly advantageous communication method for achieving design review effectiveness. The mean for each of the modes of
K. J. Ostergaard et al.
Table 4. Evaluation of design review teams' problem lists against problem decomposition.

ID    Weight    Teams identifying (of 6)
F1      4       6
F2      4       5
F3      4       3
F4      9       5
F5      9       1
F6      9       2
F7      4       2
D1      9       5
D2      9       5
D3      9       2
B1      1       4
B2      1       5
M1      4       6
M2      4       6
E1      1       6

Mode                  Weighted total: mean (s)    Unweighted total: mean (s)
Speech only           54.0 (11.314)               11 (1.414)
Text only             48.0 (9.899)                10.5 (0.707)
Free (face-to-face)   51.5 (3.536)                10 (2.828)
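The scoring scheme behind table 4 can be sketched in a few lines. The flaw weights below are those listed in the problem decomposition; the 1/4/9 scale is assumed to rank difficulty, and the example team is hypothetical, chosen to miss exactly the four rarely identified flaws (F5, F6, F7, D3).

```python
# Flaw weights taken from the problem decomposition (tables 2 and 4).
# Assumption: the 1/4/9 scale ranks difficulty (1 = easy, 9 = hard).
WEIGHTS = {
    "F1": 4, "F2": 4, "F3": 4, "F4": 9, "F5": 9, "F6": 9, "F7": 4,
    "D1": 9, "D2": 9, "D3": 9, "B1": 1, "B2": 1,
    "M1": 4, "M2": 4, "E1": 1,
}

def score_team(identified):
    """Return (unweighted, weighted) totals for one team's flaw list."""
    return len(identified), sum(WEIGHTS[f] for f in identified)

# Hypothetical team that found every flaw except the four that two or
# fewer groups recognized:
found = [f for f in WEIGHTS if f not in {"F5", "F6", "F7", "D3"}]
print(score_team(found))  # (11, 50): 11 of 15 flaws, 50 of 81 weighted points
```

A perfect review would score (15, 81), so the weighted totals in table 4 can be read as fractions of 81.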
The mean for each mode of communication falls within the standard deviation range of each of the other modes for both the weighted and unweighted data sets. Further, there was no clear correlation between the communication method and the ability to identify problems in particular categories or problems of a particular difficulty. Additional replications of each scenario, using different design teams, would need to be conducted in order to identify distinct trends in these areas. The ability to recognize the design problems may correlate more with the team composition and the ability to adapt to available resources than with the communication restrictions. Records (observer notes, video, audio and text logs, and marked non-list documents) show that the speech-only and text-only teams in both replications discussed design inadequacies that did not appear on the recorders' final lists. For these teams, this miscommunication could be attributed to the ineffectiveness of the communication method, the recorders' training, the method of documentation, or the motivation of the team. Note, however, that one face-to-face communication team also failed to record one problem that was discussed during the review. If these problems had been recorded, the new totals would be as shown in table 5. These results are provided for completeness and indicate the need for further investigation of the recording mode to determine whether checklists are appropriate for this type of design review session.

3.5 Statistical analysis
As the focus of this paper is to present a systematic methodology for studying communication modes in collaborative design review sessions, an example of the type of statistical analysis required is provided. With only two replications per mode, the results of this statistical analysis do not provide significant insights into this specific study; the analysis is presented for completeness. A single-factor ANOVA (table 6) was conducted, as the same individuals were not used for each of the communication types, though all participants were drawn from a homogeneous sample pool. The single-factor ANOVA is an extension of the two-sample t-test in which the data compared differ in only one factor (therefore, only one analysis needs to be conducted); in this case, the single factor altered is the communication method. In addition, since these are human subjects, the level of significance was raised to p_critical = 0.2, for a confidence level of 80%. The effects of communication mode were examined on seven different views of the collected data. None of these tests produced a calculated p value less than the identified p_critical; thus, none of the tests indicates any significant influence of the communication mode. Unfortunately, the replication size of two limits the statistical relevance of this analysis, and additional experimentation is required.
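To make the single-factor analysis concrete, a one-way ANOVA can be written in plain Python. The weighted totals below are illustrative reconstructions consistent with the per-mode means and standard deviations reported in table 4, not the study's raw measurements, and the closed-form p-value relies on there being exactly three groups (numerator degrees of freedom equal to 2).

```python
def one_way_anova(groups):
    """Single-factor (one-way) ANOVA; returns (F, p).

    The closed-form p-value below is the exact survival function of the
    F distribution when the numerator df is 2, i.e. for exactly three
    groups, as with the three communication modes in this study."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df1, df2 = k - 1, n - k
    f = (ss_between / df1) / (ss_within / df2)
    assert df1 == 2, "closed-form p-value requires exactly three groups"
    p = (1 + 2 * f / df2) ** (-df2 / 2)  # P(F(2, df2) > f)
    return f, p

# Illustrative weighted totals, two replications per mode -- these are
# reconstructions consistent with table 4's summary statistics, NOT the
# study's raw data.
speech, text, free = [46, 62], [41, 55], [49, 54]
f_stat, p_value = one_way_anova([speech, text, free])
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# p is far above p_critical = 0.2, so the null hypothesis (no effect of
# communication mode) cannot be rejected at the 80% confidence level.
```

The same test is available off the shelf as `scipy.stats.f_oneway`, which also handles more than three groups.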
4. Observations

4.1 Personality composition
Research studying engineering student design project teams indicates that the composite personality type of the group does not have a significant influence on the performance of the team (Varvel et al. 2004). Thus, to supplement the data collected via the design review user study, each participant completed a personality survey which provided information such as level of extroversion or introversion, sensing or intuition preferences, preferences
Table 5. Results considering unrecorded problems.

Team (run)   Documented quantity   Undocumented   Total quantity   Weighted complete total
Free (1)     12                    1              13               63
Speech (1)   10                    2              12               54
Text (1)     11                    1              12               64
Free (2)      8                    0               8               49
Speech (2)   12                    1              13               63
Text (2)     10                    1              11               45
Table 6. ANOVA tests on collected data.

Test                               Calculated p   Result (p < p_critical?)
Total errors (weighted)            0.815248       Fail
Total errors (not weighted)        0.872443       Fail
Function errors (weighted)         0.786806       Fail
Function errors (not weighted)     0.877642       Fail
Dimension errors (weighted)        0.760726       Fail
Dimension errors (not weighted)    0.760726       Fail
Bill of materials (not weighted)   0.603682       Fail
for thinking versus feeling, and inclinations toward judgement versus perception (Bayne 1995). According to the survey, the compositions of most of the teams were fairly uniform. It should be noted that the most productive team, the speech-only team in Run 2, was heavily extroverted and tended to perceive information intuitively. The extroversion factor may have aided the team's ability to process information collectively rather than individually, and its intuitive nature suggests that the members may have converted direct sensory input into more complex concepts, allowing them to identify a high number of the heavily weighted, more difficult problems. While personality effects on group decision-making effectiveness were not the focus of this research, the data collected, such as the face-to-face communication team in Run 2 being heavily introverted and identifying the fewest design flaws, do indicate that future investigation is warranted.

4.2 Delay before initial communication
The time of preparation, or delay before communication commenced, varied among the scenarios. For the two face-to-face communication teams, the delay times were negligible in Run 1 and 1 min in Run 2. The speech-only teams waited 4 and 3 min, respectively, in Runs 1 and 2 before initiating communication. The text-only team in Run 1 waited 10 min before initiating the chat session, but the text-only team in Run 2 had a negligible delay. This range in time before initial communication may reflect a variance in the level of comfort or security with the three communication methods.

4.3 Perceived effectiveness
The perceived effectiveness of the design reviews varied across the runs. All of the participants in the face-to-face communication runs felt that they had identified every problem with the design. This false sense of achievement may be attributed to the level of comfort the team members had in the design review. In general, the participants in this scenario felt that the communication method did not inhibit the progress of the review. Eighty per cent of the speech-only participants felt that they had identified all of the design problems. Participants in this scenario, though, did feel limited by not being able to communicate with gestures or share drawings. In the text-only scenario, just 20% of the participants felt that they had identified all of the design problems. Most participants in this scenario reported that using text chat limited their ability to communicate and identify problems with the design. Specifically, the need to type an explicit explanation of each problem delayed communication, and participants also remarked on the delay caused by having to edit their own messages, after reading others' messages, before broadcasting. One participant noted that rules of communication would facilitate a more effective review. Records of the text sessions show that the teams experienced problem fixation; that is, they discussed a single issue repeatedly and at length. Even when team members suggested that a particular topic had been sufficiently discussed and proposed moving on to a new topic, focus was not easily moved from some problems.

4.4 Group vs. individual
While the focus of this experiment is to study the effects of communication mode on collaborative design reviews, a secondary study was conducted using individuals as a control population to determine the effects of group versus individual work in identifying design flaws. This secondary analysis was done to verify that the design problem and documents provided to the groups were in fact sufficiently complicated that identifying the flaws required more than a single person. If the design problem were such that an individual performed at the same effective level as a group, then the results would not be indicative of group performance under the different modes of communication, but rather would be derived from a single team member's performance. Thus, nine individual design reviews were conducted using the same participant pool, as shown in table 7. Only one individual's weighted total of 44 exceeded that of the poorest performing group, the text-only team in Run 2 (41). Also, three individuals matched the quantity of problems (not the weighted total) identified by the free team in Run 2. Aside from these anomalies, the individuals did not identify as many problems or achieve weighted totals as high as the design review teams. On average, the teams identified 42.9% more problems and achieved a 52.5% higher weighted total. Based upon this verification study, it is felt that the design problem was of sufficient magnitude to require group participation. The limitation of having a sole reviewer, with a smaller body of expertise than that of an entire team, clearly impacted the effectiveness of the design review. In follow-up surveys, several of the individuals expressed that it was difficult to conduct a review without the input and feedback of others. This supports the proposition that teamwork exceeds the potential of individual work (Hammond 2001).
For instance, group work allows (1) errors and flawed suggestions to be checked, (2) the ablest member to have greater influence, (3) the most confident member to have social influence, (4) greater focus on the task due to group membership, and (5) a greater amount of information or mental resources to be available. The use of organized methods, such as following the design review checklist or the PDS, seemed to aid in identifying more problems in all scenarios. The advantages to be gained from organization may extend to the area of communication. As suggested by one participant, rules of communication may be needed to enable full participation and efficient discourse. These optimal organization methods and rules may be applied to design review communication tools intended for use by distributed designers.
Table 7. Individuals' review results.

Reviewer   Quantity   Weighted total
Ind. 1      4         10
Ind. 2      2          8
Ind. 3      8         36
Ind. 4      6         18
Ind. 5      7         30
Ind. 6      6         23
Ind. 7      8         44
Ind. 8      8         31
Ind. 9      5         19
Average     6.0       24.3
Std dev     2.1       11.9
Max         8.0       44.0
Min         2.0        8.0
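The summary rows of table 7 can be checked directly with the standard library; the fact that the sample (n − 1) standard deviation reproduces the tabulated 2.1 and 11.9 supports reading the table's unlabelled spread row as the sample standard deviation.

```python
from statistics import mean, stdev  # stdev uses the sample (n - 1) form

# Nine individual reviewers' results, transcribed from table 7.
quantities = [4, 2, 8, 6, 7, 6, 8, 8, 5]
weighted = [10, 8, 36, 18, 30, 23, 44, 31, 19]

for label, data in (("quantity", quantities), ("weighted total", weighted)):
    print(f"{label}: mean={mean(data):.1f}, std={stdev(data):.1f}, "
          f"max={max(data)}, min={min(data)}")
# quantity: mean=6.0, std=2.1, max=8, min=2
# weighted total: mean=24.3, std=11.9, max=44, min=8
```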
5. Conclusions

It is necessary for designers with the appropriate expertise to communicate during the product development process, notably during design reviews. Since this expertise may be distributed across different geographic locations of an organization, teams face new challenges in effective communication. The communication methods prescribed in this study were chosen to simulate current technologies commonly used in situations of geographic or organizational distribution. The investigation tested the hypothesis that the available communication method impacts the effectiveness of design reviews. While this hypothesis could not be confirmed, other findings were made. Notable results from the study include the following: group design reviews were approximately twice as effective as individual design reviews; face-to-face communication produced a greater perceived effectiveness than speech-only communication; and speech-only communication produced a greater perceived effectiveness than text-only communication. Certain personality factors, such as extroversion and intuition, may have contributed to higher productivity in design review teams, though determining this impact was outside the scope of this investigation. The use of organized methods aided in identifying more problems irrespective of the communication method. Rules of communication and organization may be needed to enable full participation and efficient discourse in distributed design reviews, and optimal organization methods and rules may be applied to design review communication tools to facilitate more effective distributed reviews. Most significantly, this paper has presented a methodology for investigating the effects of communication methods on design review effectiveness, demonstrated by conducting a limited exercise with undergraduate mechanical engineering students.
Important aspects of the methodology include the calibration study using graduate students, the design of data collection that is objective and quantifiable, and the development of a design problem complex enough that group reviews capture more flaws than individual reviews. Future experiments may be developed based upon this systematic approach to research on design review effectiveness, which in turn will lead to a better understanding of design as a social activity.

References
Adelson, B., Uncovering how designers design. Res. Eng. Design, 1989, 1(1), 35–42.
Bayne, R., The Myers–Briggs Type Indicator: A Critical Review and Practical Guide, 1995 (Chapman & Hall: New York).
Bordia, P., Face-to-face versus computer-mediated communication: a synthesis of the experimental literature. J. Bus. Commun., 1997, 34(1), 99–120.
Brewerton, P. and Millward, L., Organizational Research Methods, 2001 (SAGE: London).
Carletta, J., The effects of multimedia communication technology on non-collocated teams: a case study. Ergonomics, 2000, 43(8), 1237–51.
Case, M. and Lu, S., Discourse model for collaborative design. Comput. Aid. Design, 1996, 28(5), 333–45.
Chansler, P., Swamidass, P. and Cammann, C., Self-managing work teams: an empirical study of group cohesiveness in 'natural work groups' at a Harley-Davidson Motor Company plant. Small Group Res., 2003, 34(1), 101–20.
Chiu, M., An organizational view of design communication in design collaboration. Des. Stud., 2000, 23(2), 187–210.
Clements, P., Active reviews for intermediate designs, 2000 (Carnegie Mellon University: Pittsburgh, PA).
Costa, R. and Sobek, D., How process affects performance: an analysis of student design productivity, in Design Engineering Technical Conferences, 2004, DETC2004, DTM-57274.
Cotnareanu, T., Old tools—new uses: equipment FMEA, a tool for preventative maintenance. Qual. Prog., 1999, 32(12), 48–52.
Daily, M., Howard, M., Jerald, J., Lee, C., Martin, K., McInnes, D. and Tinker, P., Distributed design review in virtual environments, in ACM Conference on Collaborative Virtual Environments 2000, 2000, 3, pp. 57–63.
Dixon, J. and Poli, C., Engineering Design and Design for Manufacturing, 1995 (Field Stone: Conway, MA).
Garner, S., Comparing graphic actions between remote and proximal design teams. Des. Stud., 2001, 22(4), 365–76.
Griffin, E., Groupthink of Irving Janis. In A First Look at Communication Theory, 1997 (McGraw-Hill: St. Louis, MO).
Hammond, J., Distributed collaboration for engineering design: a review and reappraisal. Hum. Factors Ergon. Manuf., 2001, 11, 35–52.
Hiltz, S., Johnson, K. and Turoff, M., Experiments in group decision-making: communication process and outcome in face-to-face versus computerized conferences. Hum. Commun. Res., 1986, 13, 225–52.
Kirschman, C.F. and Fadel, G., Customer metrics for the selection of generic forms at the conceptual stage of mechanical design, in Design Engineering Technical Conferences, 1997, DTM, pp. 3877.
Kirschman, J. and Greenstein, J., The use of groupware for collaboration in distributed student engineering design teams. J. Eng. Educ., 2002, 91(4), 403–8.
Lees, B., Branki, C. and Aird, I., A framework for distributed agent-based engineering design support. Autom. Constr., 2001, 10(5), 631–37.
Lim, L. and Benbasat, I., The debiasing role of group support systems: an experimental investigation of the representativeness bias. Int. J. Hum. Comput. Stud., 1997, 47(3), 453–71.
Lurey, J. and Raisinghani, M., An empirical study of best practices in virtual teams. Inform. Manag., 2001, 38(8), 523–44.
McKoy, F., Vargas-Hernandez, N., Summers, J.D. and Shah, J.J., Experimental evaluation of engineering design representations for idea generation, in Design Engineering Technical Conferences, 2001, DETC-2001, DTM-21685.
Olson, J. and Olson, G., Distance matters. Hum. Comput. Interact., 2000, 15(2–3), 139–78.
Orengo Castella, V., Zornoza Abad, A.M., Prieto Alonso, F. and Peiro Silla, J., The influence of familiarity among group members, group atmosphere, and assertiveness on uninhibited behavior through three different communication media. Comput. Hum. Behav., 2000, 16, 141–59.
Pahl, G. and Beitz, W., Engineering Design: A Systematic Approach, 1996 (Springer: New York).
Parnas, D. and Weiss, D., Active design reviews: principles and practice, in 8th International Conference on Software Engineering, 1985, pp. 132–136.
Pugh, S., Integrated Methods for Successful Product Engineering, 1991 (Addison-Wesley: London).
QRAM, Design Reviews, 2001. Available online at http://www.qram.com/Reviews.htm (accessed 13 July 2005).
Sater-Black, K. and Iverson, N., How to conduct a design review. Mech. Eng., 1994, 116, 89–93.
Shah, J.J., Vargas-Hernandez, N., Summers, J.D. and Kulkarni, S., Collaborative sketching (C-Sketch): an idea generation technique for engineering design. J. Creat. Behav., 2001, 35(3), 168–98.
Simoff, S. and Maher, M., Analysing participation in collaborative design environments. Des. Stud., 2000, 21, 119–44.
Smolensky, M.W., Carmody, M.A. and Halcomb, C.G., The influence of task type, group structure and extraversion on uninhibited speech in computer-mediated communication. Comput. Hum. Behav., 1990, 6, 261–72.
Teng, S., Failure mode and effects analysis: an integrated approach for product design and process control. Int. J. Qual. Reliab., 1996, 14(5), 8.
Ullman, D., The Mechanical Design Process, 2003 (McGraw-Hill: New York).
Varvel, T., Adams, S., Pridie, S. and Ruiz Ulloa, B., Team effectiveness and individual Myers–Briggs personality dimensions. J. Manag. Eng., 2004, 20(4), 141–6.
Wetmore, W., PRSM: proper review selection matrix. Masters thesis, Clemson University, 2004.
Wetmore, W. and Summers, J.D., Group decision making: friend or foe, in International Engineering Management Conference (IEMC2003), 2003.
Wetmore, W. and Summers, J.D., Influence of group cohesion and information sharing on effectiveness of design review, in Design Engineering Technical Conferences, 2004, DETC-2004, DAC-57509.
Wierba, E., Finholt, T. and Steves, M., Challenges to collaborative tool adoption in a manufacturing engineering setting: a case study, in HICSS-35, 2001.