
Simulation & Gaming http://sag.sagepub.com

Enhancing Learning Capabilities by Providing Transparency in Business Simulators
Andreas Größler, Frank H. Maier and Peter M. Milling
Simulation & Gaming 2000; 31; 257
DOI: 10.1177/104687810003100209
The online version of this article can be found at: http://sag.sagepub.com/cgi/content/abstract/31/2/257

Published by: http://www.sagepublications.com

On behalf of: Association for Business Simulation & Experiential Learning International Simulation & Gaming Association Japan Association of Simulation & Gaming

North American Simulation & Gaming Association Society for Intercultural Education, Training, & Research

Additional services and information for Simulation & Gaming can be found at:
Email Alerts: http://sag.sagepub.com/cgi/alerts
Subscriptions: http://sag.sagepub.com/subscriptions
Reprints: http://www.sagepub.com/journalsReprints.nav
Permissions: http://www.sagepub.com/journalsPermissions.nav

Downloaded from http://sag.sagepub.com at PENNSYLVANIA STATE UNIV on April 17, 2008 © 2000 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution.


Enhancing learning capabilities by providing transparency in business simulators

Andreas Größler
Frank H. Maier
Peter M. Milling
Mannheim University

Prefabricated computer-based simulations usually offer a user-friendly interface. This allows inexperienced users fast access to the simulation because they do not need specific knowledge about simulation techniques. Thus, giving simulation models an easy-to-use interface increases the acceptance of the simulation tool and draws attention to it. Learners are able to examine not only the results of their decisions but also the causes of these results, using powerful system dynamics diagramming techniques. This adds transparency to the former black-boxes, producing so-called transparent-box business simulators. This article reports on an experiment evaluating the relevance and effects of structural transparency. The experimental design also can be used to examine other types of business simulators. Hypotheses regarding the effectiveness of transparency were tested. The results show the necessity for further research and collaboration.

KEYWORDS: business simulators; transparency; structural information; effectiveness of simulation/game; system dynamics; evaluation.

Computer simulation models are powerful tools to support problem-solving and learning processes. In particular, the iterative process of modeling and simulating—which leads to a simulation model mapping the problem structure adequately and showing the behavior modes of reality—is important in promoting the understanding of complex systems. Partly because modeling and simulation require significant expertise, business simulators have been developed to allow easier access to a specified model and simulation through the implementation of a user-friendly interface. Although these devices are to a large extent designed as black-box simulators, they enable users to experience the dynamics created by their policies and decisions and therefore facilitate learning about and understanding of complex systems. However, black-box simulators do not provide direct insight into the problem structure. No information about the internal feedback structure of the model is available, and the users usually have not participated in the process of model development. Hence, black-box simulators are assumed to be of limited effectiveness and efficiency in supporting learning and problem-solving capabilities. For several years, the idea of transparent-box simulation/gaming has been discussed in the system dynamics (SD) community.

Adding features to provide structural information about the underlying model could be a means to combine the advantages of user-friendly simulators with the power of model-building and analysis tools. Learners are not only able to examine the results of their decisions and experience dynamics; they can, for example, trace the causes of these results using SD diagramming techniques. This introduces transparency to the former black-boxes, producing so-called transparent-box business simulators. The assumption is that providing structural information about the system being modeled will lead to further and improved understanding of the problem and the system under consideration and consequently to better decisions. Because little research has been done investigating this assumption, we have designed and conducted an experiment to evaluate the relevance and the effects of providing structural transparency to users of business simulators.

Information about the feedback structure of the model underlying a particular business simulator can be presented to the user in a broad variety of ways, ranging from simple slide-based lectures preceding or accompanying the use of the simulator to highly sophisticated interactive help facilities embedded in the business simulator. Thus, our experimental design covers both ends of the spectrum, including

• a treatment group receiving structural information through a short lecture and an online help facility available while working with the simulator,
• a treatment group receiving structural information only through the help facility,
• a treatment group receiving structural information only through the lecture, and
• a control group that receives none of the above described structural information while working with the simulator.

AUTHORS' NOTE: Andreas Größler wishes to express his thanks to Pål Davidsen and J. Michael Spector for their help and encouragement in an earlier stage of the research project reported here.

SIMULATION & GAMING, Vol. 31 No. 2, June 2000, 257-278. © 2000 Sage Publications, Inc.

All four groups take a knowledge test before and after working with a comprehensive business simulator. The results of the knowledge test, as well as the results the users achieve in different scenarios of the simulator, are used to evaluate our hypotheses about the effectiveness of providing structural information. The results of implementing this design are to some extent promising but also show the necessity for further research.

Transparent-Box Business Simulators as a Middle Course Between Black-Boxes and Building Models From Scratch

Companies are complex and dynamic systems that have to be managed: information about the relationship between structure and behavior of the system has to be analyzed, policies must be designed, and decisions have to be made and then transformed into actions. However, due to the complexity and the dynamics of the system called "company," effective management is a very difficult challenge. Therefore, methods, techniques, tools, and theories are needed to make management more effective. In the SD community, there is the principle that system-dynamics-based modeling and simulation can enhance the understanding of complex systems and improve decision making.


FIGURE 1: Insight, Learning, and Understanding Through Modeling and Simulation

This principle is represented by the process shown in Figure 1. It starts with the definition of the problem and proceeds through modeling and simulation toward the implementation of a solution. The most important activity is iterative modeling, that is, the identification of feedback loops and the explicit formulation of the mathematical equations representing the system, as depicted in the lower portion of the modeling process box. This procedure allows the identification of underlying problem structures. Simulating the model shows its behavior, and analysis of this behavior usually points out inadequate or overlooked representations of the model structure and leads to revisions of the model. Problem analysis, mapping of structural elements, and modeling without simulation, however, are insufficient for promoting understanding because it is the connection of structure and behavior that leads to understanding complex dynamic systems (Dörner, 1992; Forrester, 1994; Richardson & Pugh, 1983). Generating the behavior of a system—to simulate—is a prerequisite for learning and understanding in complex systems (Sterman, 1994). Therefore, the process represented in Figure 1 as a whole allows insight into the relationship between structure and behavior of a system. Connecting structure with behavior makes the development, testing, and implementation of improved structures, policies, and decisions possible (Forrester, 1994).

Simulations allow experimentation without confrontation with real-world consequences. Simulation makes experimentation possible and useful when, in the real-world situation, such experimentation often would be too costly or—for ethical reasons—not feasible, or where the decisions and their consequences are too broadly separated in time. Other reasons for the use of simulations are the possibility to replicate the initial situation and the opportunity to investigate extreme conditions without risk (Milling, 1996; Pidd, 1993).
The effectiveness of a problem-solving and learning process comprising model building and simulation analysis is widely accepted within the SD community (de Geus, 1988; Milling, 1991; Morecroft, 1994). Not only the model itself but


FIGURE 2: Hiding the Formal Model Within Business Simulators

the complete process of building it is important in promoting understanding of complex systems (Lane, 1995). However, partly because modeling and simulation require substantial expertise, business simulators have been developed to allow easier access to a specified model and simulation and to facilitate learning and understanding about complex systems (Bakken, Gould, & Kim, 1994; Diehl, 1992; Goodman, 1994; Kreutzer, 1994). The business simulator approach is considered a tool to fundamentally improve the understanding of complex and dynamic systems, that is, the relationship between structure and behavior.1 Several studies have been concerned with problems of using management games and business simulators effectively. They analyzed the extent to which these tools can improve learning and understanding and how such learning laboratories should be designed (Bakken et al., 1994; Lane, 1995; Paich & Sterman, 1993; Senge & Sterman, 1992).

Nevertheless, the business simulator approach only covers part of the modeling and simulation process described in Figure 1. The use of management flight simulators2 misses the iterative steps from the problem definition to the formulation of causal diagrams and mathematical models. For this reason, there have always been doubts about learning induced by simulation tools that are designed as shown in Figure 2 (Machuca, 1990; Paich & Sterman, 1993).3 For these, the business simulator is a black-box, that is, only input and output of the model are observable. There are strategies for how to deal with black-boxes effectively, and there are many everyday situations that can be described as dealing with black-boxes (for a general discussion of black-boxes and other examples, see, e.g., Ashby, 1958). Everyday life shows that learning does occur in black-box environments. One learns to drive a car ("know how") by giving some input (using the steering wheel, the pedals) and observing the output (the speed and direction of the car).
However, when it comes to understanding a system ("know why") in order to adapt or improve it, the learning initiated by black-boxes is neither sufficient nor efficient. There is some


kind of knowledge that is easy to learn by using black-boxes, whereas other knowledge is not. For this distinction of action-related knowledge versus structural knowledge, see Niegemann and Hofer (1997), who conducted various experiments with computer-based training programs. In this article, we are interested in structural knowledge, the "know why."

To find a starting point for enabling business simulators to mediate structural knowledge, one has to consider the special advantage that systems theory, and in particular SD, can offer business simulators. How is the user able to profit from the tools and techniques SD provides if he or she does not see an SD-based model at all? How can a user be sure a model is valid (Pidd, 1993) if there is no chance to inspect it? The importance of SD models lies in the clear conceptualization of model structure and the linkage to model behavior (Machuca, 1992). SD helps to develop conceptualization skills by providing tools to relate system structure to behavior, such as causal-loop diagrams, stock-and-flow diagrams, policy structure diagrams, and simulation outcomes (Graham, Morecroft, Senge, & Sterman, 1994). If there is no access to the model structure, then SD just becomes equivalent to other tools for an internal representation of data structures in a computer program. In fact, there are many business simulations that do not use SD for building the underlying model, even though the mathematical representation is similar or identical to an SD model. But how can a positive learning effect be achieved in an efficient way if one black-box (the business simulator) represents another black-box (Sterman, 1994)? Adding features to provide structural information about the underlying model could be a means of combining the advantages of user-friendly simulators with the power of model-building and analysis tools, which are supposed to provide structural insight.
When dynamic complexity is high, outcome (behavior) feedback alone is not effective (Paich & Sterman, 1993). Users must be able to examine not only the results of their decisions but also the causes of these results, providing the possibility for reflection and building truly reflective computer-based learning environments (Isaacs & Senge, 1994; Machuca, 1991). This introduces transparency to the former black-boxes, producing so-called transparent-box business simulators (Machuca & Carrillo, 1996).4 Users are enabled to inspect, to criticize, or even to advance and improve the underlying model. At the same time, their mental models are challenged when comparing them to the formal model (see Doyle & Ford, 1997, on the problems of the term "mental model" and a suggested definition). In this way, the mental models could be changed and improved as well (Senge & Sterman, 1992).5 Thus, although there are areas where transparency is not wanted, for example, in personnel selection, transparent-box business simulators can be effective simulation tools to support learning processes.

In the field of SD, the idea of providing transparency in business simulators is based on the work of Machuca, Ruiz, Domingo, and González (1998) in Seville; additional research was done by Langley (1995) in London. Research projects at Bergen (Davidsen & Spector, 1997; Spector & Davidsen, 1997) and Mannheim (Größler, 1997, 1998b) explore concepts and possibilities of emphasizing the relationships between structure and behavior in interactive learning environments.


Our hypothesis, that traditional SD modeling and simulation lead to an improved understanding of system behavior and to more effective decision making, raises doubts about the effectiveness of learning with black-box business simulators because they do not provide insight into the underlying model structure. Hence, the use of transparent-box business simulators could be a suitable approach to facilitate learning in and about the dynamics of complex systems without giving up easy access to the model through an intuitive user interface.

Implementing Transparency in a Business Simulator

In this section, some practical issues about structural feedback are discussed.6 How far is feedback about structural information already realized in a given business simulator? How can it be added or extended? Which forms of feedback seem to be useful? As a basis for this research, LEARN!, a German business simulator, is used, which was developed in recent years at the Industrieseminar of Mannheim University. LEARN! is a rather complex management flight simulator. Decisions have to be made in most fields of corporate management, for example, finance, personnel, production, and marketing. Furthermore, LEARN! deals with the processes of research and development, problems of time-to-market and time-to-volume, and the diffusion of new products (Milling & Maier, 1997). As in many management games, the major goals of LEARN! are to ensure the survival of the company and to make a profit.

The following steps were taken to provide structural information to the users of the business simulator LEARN!:

1. Three main areas of importance within LEARN! were identified as the focus of structural information (presentation and help function): market demand and sales, the relationship between market demand and costs, and restrictions on financial liquidity.

2. A 45-minute lecture was developed, containing material about the important feedback loops within the model and a short section about causal-loop diagramming for users not familiar with the method. It is constructed in a linear sequence. The presentation transparencies used in the lecture could easily be converted into a computer-based presentation for use without a facilitator or presenter. This presentation could be combined with LEARN! and be displayed automatically whenever the simulator is started; a possibility to stop the presentation and start playing immediately could then be provided. See Figure 3 for an example screen.

3. LEARN! already offered a global help function that contains information about the general model structure. For the three most important areas of the simulation game, a particular structural help function was implemented that displays special information about the model structure. For this purpose, various help pages were created, which mainly consist of causal-loop diagrams as a graphical representation of system structure. By clicking on a variable or a relation between two variables, the user is able to get more information about it. An example help screen is shown in Figure 4.

For Steps 2 and 3, data already available from the Vensim-based simulation model (causal diagrams) were used.7 Nevertheless, the material needed additional work to make it more readable.
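To illustrate how clickable structural help pages of the kind described above might be organized, the sketch below maps each help area to a causal-loop diagram resource plus per-variable and per-link explanations. All identifiers, file names, and explanation texts here are illustrative assumptions, not taken from the actual LEARN! implementation.

```python
# Hypothetical data layout for structural help pages: each page holds a
# causal-loop diagram plus explanations that appear when the user clicks
# a variable or a relation between two variables. Names are assumptions.

HELP_PAGES = {
    "market_demand_and_sales": {
        "diagram": "demand_sales.clp",  # causal-loop diagram resource (assumed name)
        "variables": {
            "price": "Lower prices raise demand but reduce the unit margin.",
        },
        "links": {
            ("price", "demand"): "Negative link: a higher price lowers demand.",
        },
    },
}

def lookup(page_id, element):
    """Return the explanation shown when the user clicks an element:
    either a variable name or a (source, target) relation."""
    page = HELP_PAGES[page_id]
    return page["variables"].get(element) or page["links"].get(element)
```

In such a layout, extending the help function to a new model area only requires adding another page entry, not changing the viewer code.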


FIGURE 3: Example Screen of Presentation About System Structure (originally in German)

FIGURE 4: Example Screen of Help Function (originally in German)

Effectiveness of Transparent-Box Business Simulators

General Hypotheses About Effectiveness of Transparency in Business Simulators

To measure the relevance and effectiveness of structural information (transparency condition), the following four hypotheses were formulated:


FIGURE 5: Groups in Experiment

Hypothesis 1a: Individuals who get structural information perform better in controlling a business simulator than those who do not get it.

Hypothesis 1b: Individuals who get structural information in two ways (presentation and help function) perform better in controlling a business simulator than those who get only one form of structural information.

Hypothesis 2a: Individuals who get structural information learn more about system structure and its implications for system behavior than those who do not get it.

Hypothesis 2b: Individuals who get structural information in two ways (presentation and help function) learn more about system structure and its implications for system behavior than those who get only one form of structural information.

To test these hypotheses in an experiment, the independent variables must be varied systematically (Zimbardo, 1992). For a discussion of the usefulness of experiments with complex systems, see, for instance, Kluwe (1993). Thus, a 2 × 2 experimental design with four experimental groups is necessary: two groups of participants receive the presentation about the feedback relationships and two do not; orthogonally, two groups have access to structural help functions while playing the game and two do not.

An Experimental Design to Measure the Effectiveness of Transparency

In the actual experiment, the following design was used (see Figure 5). In particular, the conditions of Treatment Group 3 (presentation about the system's structure) had been tested several times before, with positive but insignificant results (Putz-Osterloh, 1993; Süß, 1996; Süß, Beauducel, Kersting, & Oberauer, 1992). In fact, participants were quite good at reproducing the structural information provided, but they were not able to improve their system control competence. On the other hand, experiments conducted by Langley (1995), who used online cognitive feedback (a sort of help function similar to that used in Treatment Groups 1 and 2), showed effects concerning participants' productivity, measured as performance ÷ time needed.


FIGURE 6: Design of Experiment NOTE: tg1 = Treatment Group 1, tg2 = Treatment Group 2, tg3 = Treatment Group 3, cg = control group, O1 = preknowledge test, O2 = postknowledge test, X1 = presentation about system structure, and X2 = structural help function.
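Using the group labels from the note above (tg1, tg2, tg3, cg), the 2 × 2 design can be sketched by crossing the two binary factors, presentation (X1) and structural help (X2). A minimal sketch (the label mapping is our reading of Figure 5, not original code):

```python
from itertools import product

# Two binary factors of the 2x2 factorial design:
# did the group see the structural presentation (X1)?
# was the structural help function available in-game (X2)?
factors = {"presentation": (True, False), "structural_help": (True, False)}

def group_label(presentation, structural_help):
    """Map a factor combination to the article's group labels."""
    if presentation and structural_help:
        return "tg1"  # lecture + structural help function
    if structural_help:
        return "tg2"  # structural help function only
    if presentation:
        return "tg3"  # lecture only
    return "cg"       # control group: neither treatment

design = {
    group_label(p, h): {"presentation": p, "structural_help": h}
    for p, h in product(*factors.values())
}
```

Enumerating the design this way makes explicit that every factor combination is covered exactly once, which is what allows the main effects of presentation and help function to be separated.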

The experiment was composed of the following:

1. an introduction to using LEARN!,
2. a preknowledge test,
3. a lecture about ratios and ratio systems for the control group and one treatment group, and a presentation about the model structure for the other treatment groups,
4. three game runs with different scenarios, and
5. a postknowledge test.

This type of experimental design is known as an untreated control group design with pretest and posttest (Cook & Campbell, 1979). In Figure 6, Oi stands for observation i, and Xj stands for treatment j. Thus, O1 symbolizes the preknowledge test, O2 the postknowledge test, X1 the presentation about system structure, and X2 the structural help function. The figure again shows the differences between the experimental groups. The experiment starts at the left-hand side and advances with time to the right.8 Table 1 shows the timetable of the experiment.

During the instructions, all groups were urged to make use of the help function for getting information while playing the game. Structural help was available in all scenarios for Treatment Groups 1 and 2; the control group and Treatment Group 3 could access the normal help function of LEARN!. The actual use of the help function was measured by a program that recorded every call of a single help page. At the beginning of the experiment, two goals were made clear to the participants: to obtain a good result when controlling the simulation scenarios and to learn as much as possible about system structure and behavior.

The participants in Treatment Groups 1 and 3 received the aforementioned presentation about system structure. In addition, these participants had to complete short exercises concerning the three main areas of importance in the prescenario intervention. The exercises were intended to turn declarative knowledge into procedural knowledge, which is assumed to be necessary for successfully controlling complex systems. The groups receiving the


TABLE 1: Timetable of Experiment

Duration (min)  Activity
10              All groups: biographical and business knowledge questionnaire
30              All groups: instructions about using the simulator (including help function) and the goal of simulation play
30              All groups: preknowledge test
45              Intervention. Control group and Treatment Group 2: lecture about ratios and ratio systems (considered useless). Treatment Groups 1 and 3: lecture and exercise about the feedback structure of the main game areas
45              1. Scenario: equal starting points (Treatment Groups 1 and 2: with structural help)
45              2. Scenario: unequal starting points (Treatment Groups 1 and 2: with structural help)
45              3. Scenario: threat of bankruptcy (Treatment Groups 1 and 2: with structural help)
30              All groups: postknowledge test


useless lecture also had to complete short exercises. Participants in Treatment Groups 1 and 2 played LEARN! with the structural help function.

The experiment was conducted with graduate students who could select one of the four groups without knowing the differences between them. There were 14 persons in each group, totaling 56 individuals. None of the participants had any previous experience with LEARN!. The participants were told that the content of the lectures was crucial for playing the game successfully. The duration of scenario exploration had to be long enough to examine the structural help provided and to acquire a certain level of expertise; thus, participants had to play 15 game periods per 45-minute scenario session. The use of different scenarios was intended to lead to more stable results because it made it impossible for the participants to improve by just repeating patterns of behavior (Süß, 1996).

To measure history-related and future-related results, game performance in each scenario was measured using two scores: the development of capital resources (equity capital, increased by cumulated profits over time) and the quality of actual products and products under development (as an indicator of the company's preparedness for the future). Participants were told that both scores were equally important and that they had to maximize both. The scores were displayed on the screen throughout the game run. As a final measure, the mean value of the two scores was calculated.

Learning was measured using a specific knowledge test designed for LEARN!. This test differs from widely used knowledge tests, for instance, the Heidelberger Struktur-Lege-Technik (Scheele & Groeben, 1984). The main focus of our test does not lie in the correct reproduction of abstract system relations but rather in answering questions about the consequences of certain actions in a certain situation: Participants had to give prognoses about future system states.
Thus, a qualitative instrument was used to assess the qualitative (and not easily quantifiable) goal of knowledge accumulation. For example, participants had to answer the question whether, in a given situation, lowering the product price would increase sales revenues. An understanding of the system structure should help in fulfilling this task, but it is not enough to memorize and reproduce, for instance, given causal-loop diagrams to accomplish this goal. Similar to the simulation game, participants have to predict the results of decisions. The questions comprise all decision areas of LEARN! and are embedded in a consultancy context, that is, participants take over the role of a consultant showing the management of LEARN! the results of their actions. In addition to the advantage of not asking for memorized data, this method prevents the danger of having answers that heavily depend on situational circumstances: In this instrument, correct answers are absolute and solely based on factors that are well defined up front.

This prognosis part of the knowledge test consists of 27 true/false questions. Each correct answer counts as one plus point; each incorrect answer counts as one minus point. The knowledge test also contains a section in which participants have to complete two rudimentary causal-loop diagrams. This part of the knowledge test is analyzed using Funke's (1985) formula for the quality of such diagrams. The overall knowledge


test performance is calculated as the mean value of the two test scores (prognosis and diagramming). However, these two scores also can be examined separately.

At the very beginning of the experiment, participants have to fill out a small questionnaire to control for potentially confounding variables. They are asked about age, sex, field of study, previous experience with business simulators, and attitude toward game playing. Furthermore, a short questionnaire (12 questions) is used to determine participants' level of general business knowledge.
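The scoring rules described above can be summarized in a short sketch: +1/-1 for each of the 27 prognosis items, and overall measures taken as the mean of two sub-scores. The function names are ours; the article does not publish its evaluation code, and Funke's (1985) diagram-quality formula is deliberately left abstract here.

```python
PROGNOSIS_ITEMS = 27  # number of true/false questions in the prognosis part

def prognosis_score(answers, solutions):
    """+1 per correct true/false answer, -1 per incorrect one."""
    assert len(answers) == len(solutions) == PROGNOSIS_ITEMS
    return sum(1 if a == s else -1 for a, s in zip(answers, solutions))

def knowledge_test_score(prognosis, diagramming):
    """Overall knowledge test performance: mean of the two sub-scores
    (the diagramming score would come from Funke's 1985 formula)."""
    return (prognosis + diagramming) / 2

def game_score(history_related, future_related):
    """Final game measure: mean of the equity-capital (history-related)
    and product-quality (future-related) scores."""
    return (history_related + future_related) / 2
```

Note that the +1/-1 rule makes the prognosis score range from -27 to +27, so pure guessing is expected to score around zero.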

Results of the Experiment

Internal consistency concerning the results obtained in the knowledge tests is satisfactory, with a Cronbach's alpha of 0.81 (Cronbach, 1990). In contrast, Cronbach's alpha for the simulation scores is only between 0.43 and 0.49, depending on whether the history-related score, the future-related score, or the mean of both is examined. Thus, concerning scores, internal consistency is rather low. However, this result does not surprise us because three different scenarios were deliberately chosen to test control performance in different situations and to suppress memory effects between the simulation runs.

The business knowledge test showed no significant differences between the groups. (Detailed results for all variables are given in the appendix.)9 One can suppose the groups were equal concerning their preexperimental knowledge of general management issues. The biographical questionnaire also did not reveal any significant differences between the groups.

The knowledge test showed significant differences between groups: In the preknowledge test, the only significant difference occurred between Treatment Group 2 and the control group, which outperformed this treatment group and achieved the best results of all groups (p = .046, Mann-Whitney test). The control group also achieved the best results in the postknowledge test; again, compared to Treatment Group 2, they did significantly better (p = .015, Mann-Whitney test). Furthermore, in the postknowledge test, Treatment Group 3 achieved better results than did Treatment Group 2 (p = .043, Mann-Whitney test). Therefore, the groups differed in their performance in the knowledge tests.

Of more importance for the given issue is a change (improvement) from preknowledge to postknowledge test (also see Figure 7). For the complete sample, a highly significant improvement of performance can be observed (p < .001, Wilcoxon test).
If the groups are examined separately, one finds highly significant improvements for Treatment Group 1 and Treatment Group 3 (0.004 and 0.003, respectively, Wilcoxon test) and no significant changes for the other groups. These findings do not change if performance in the causal-loop diagramming part of the knowledge test is evaluated separately from the prognosis part. However, if the improvement is compared between groups, no significant differences can be found. That is, it cannot be deduced from the data that some groups acquired more knowledge than others.
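The pre-to-post comparisons above rely on the Wilcoxon signed-rank test for paired samples. A minimal SciPy sketch, using invented pre/post scores rather than the study's data:

```python
from scipy.stats import wilcoxon

# Hypothetical pre- and post-test knowledge scores, paired by participant
pre  = [22, 25, 19, 30, 27, 24, 21, 26, 23, 28]
post = [28, 27, 25, 33, 30, 29, 24, 31, 27, 32]

# Wilcoxon signed-rank test: are the paired differences symmetric around zero?
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")  # small p -> significant pre-to-post change
```

Because the test is rank-based, it suits the ordinal test scores here without requiring a normal distribution of the differences.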


FIGURE 7: Knowledge Test Results for Experimental Groups

The mean scores that players achieved during the game runs do not improve from Scenario 1 to Scenario 3. This result was expected because the difficulty of the scenarios increases. The only significant difference in game scores (calculated over all scenarios) is found for Treatment Group 3, which outperformed Treatment Group 2 (0.027, Mann-Whitney test).

A more detailed analysis shows that the two components of the game score, the history-related and the future-related measure, affect group performance differently. Considering the changes in the future-related score (product quality), no significant differences between groups can be observed. When only the change in the history-related score (equity capital) is analyzed, Treatment Groups 1 and 3 perform better than Treatment Group 2 and the control group. Mann-Whitney tests indicate that the groups receiving information about the system's structure by presentation performed significantly better (p < .05) in three cases, as shown in the appendix. Only the comparison of Treatment Group 3 and the control group does not reach the required significance level (p = .098).

Overall game scores and knowledge test performance show a correlation of .310 (significant at the .05 level). In particular, game scores in the third scenario are highly correlated with performance in the knowledge tests. Criterion validity of the knowledge test can therefore be assumed; that is, knowledge test performance predicts game performance to a certain degree. However, improvements in the knowledge test do not correlate with game performance. Thus, one cannot conclude that participants who learn more also play better, and vice versa.
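The between-group comparisons use the Mann-Whitney U test, which compares independent groups on mean ranks rather than means. A SciPy sketch on invented group scores (not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical game scores for two independent groups
treatment = [19.4, 20.1, 18.7, 21.3, 19.9, 20.6, 18.9]
control   = [17.2, 16.8, 18.1, 16.5, 17.9, 17.4, 18.3]

# Rank-based comparison: no normality assumption about the scores is needed
u, p = mannwhitneyu(treatment, control, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```

In this fabricated example every treatment score exceeds every control score, so the test reports the maximal separation of ranks; real data, as in the experiment, would show overlap and correspondingly larger p values.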



The number of help calls shows no significant difference between the groups. Calculated over all three scenarios, the mean numbers of calls to the structural help function are 9.07 (Treatment Group 1), 12.21 (Treatment Group 2), and 3.71 (Treatment Group 3). Furthermore, there is a substantial standard deviation in help calls (12.26, 12.21, and 4.07, respectively). No significant correlation can be found between the number of help calls and the gaming score: only 6.8% of score variance is explained by differences in the number of help calls.

Because of the very low and unbalanced use of the structural help function, a modification of the groups was also tested: Treatment Group 2 could serve to enlarge the control group, while Treatment Group 1 and Treatment Group 3 could form a joint treatment group. This means that Treatment 2 (structural help function, X2) is removed from consideration and from the experimental design shown in Figure 6. Thus, the 28 participants from Treatment Group 2 and the control group establish the new control group; Treatment Groups 1 and 3 become the new joint treatment group.

With these new experimental groups, no significant differences were observed concerning the business test, the preknowledge test, the postknowledge test, or the changes between knowledge tests. However, game performance showed significantly better results for the treatment group (significance of .033 for game scores, Mann-Whitney test). These findings contrast with earlier findings reported in the literature (Süß, 1996).

In summary, none of the hypotheses formulated above is supported by the original experiment. With the modified groups, however, at least one of the remaining hypotheses is supported: Hypothesis 1a. Hypothesis 1b and Hypothesis 2b are not meaningful with only one treatment (presentation about system structure).
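The statement that help calls explain 6.8% of score variance corresponds to the squared correlation coefficient (r² ≈ .068). The following sketch shows that calculation on hypothetical data of our own; the variable names and values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical per-participant data: number of help calls and game score
help_calls = np.array([0, 2, 5, 9, 12, 20, 35, 3, 7, 1], dtype=float)
score      = np.array([18.2, 19.0, 17.5, 18.8, 16.9, 19.4, 17.1, 18.0, 18.6, 17.7])

# Pearson correlation between the two variables; its square is the share of
# score variance accounted for by a linear relation with help calls
r = np.corrcoef(help_calls, score)[0, 1]
print(f"r = {r:.3f}, r^2 = {r**2:.3f} -> {r**2:.1%} of variance explained")
```

A small r², as reported in the experiment, means knowing how often a participant consulted the help function tells us almost nothing about that participant's score.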

Conclusions and Research Directions

This article deals with two aspects: (a) an appropriate and general experimental design for investigations into the effectiveness of gaming-oriented simulation tools and (b) the application of this research framework to analyze the benefits of adding transparency about model structure to business simulators.

Although the research project is still ongoing in order to increase the sample size,10 it can already be stated at this stage that certain forms of structural transparency improve the ability of participants to control a given business simulator. In contrast to many findings in the literature, it was shown that a presentation about the structure of a system has a positive influence on performance in controlling this system. Furthermore, the presentation led to more homogeneous game results within groups, indicated by a smaller variance. Thus, additional work is needed to clarify the role of the presentation and of the presenter or facilitator. Such human-human interactions were analyzed by Davidsen and Spector (1997).

In contrast to the improved game results, participants were not able to transfer their acquired knowledge to the postknowledge test in the experiment: they could not access the knowledge gained outside the gaming context. This lack of transfer is presumably not caused by a lack of validity of the knowledge test, because the significant correlation between test and control performance is a strong indicator of test validity. This is an improvement over many findings in the literature, where test validity was not found.11

Our results also contradict other research described in the literature, in which participants seemed unable to use their knowledge while controlling the complex system. Those experiments showed that participants were able to acquire knowledge that allowed them to answer the questions in the knowledge test, but this knowledge was not suitable or at hand when it was needed to control the simulator. At least partly, those findings seem to be an artifact of the form of knowledge test used, which allowed participants to simply reproduce causal diagrams that had been shown to them before.

As in most similar experiments, the actual use of the help function was very low in the setting described above. Effects of help functions containing structural information could not be measured, simply because people did not work with them enough. Two conclusions can be drawn: First, the importance of the help functions has to be emphasized even more during the experiment. Second, different forms of providing structural information have to be considered, either as a kind of help function or integrated more closely into the simulator. The search for other means of producing significant learning effects besides an enhanced help function has to be intensified.
Finally, the importance of the time available to explore a system, and therefore to learn about it, cannot be neglected (Größler, 1997). The results of this experiment support the notion that the more complex a system is, the more time is needed to understand it and to learn from it. Thus, with more time available, we assume participants would perform better both in the game and in the postknowledge test. This issue also points to a principal dilemma for experimental approaches to the effectiveness of gaming-oriented simulation tools: the time for system exploration cannot be extended arbitrarily if one wants to apply the rigor of experiments (because the time and attention of participants are limited); however, without sufficient time for system exploration, no learning effects seem to occur.

To summarize, as our research indicates, a presentation about system structure can be an important step from black-box business simulators to effective learning environments. In this way, transparency can combine with other aspects, such as constructive feedback from the tutor or facilitator, guides to improve performance, links to things already known (Gagné, 1985), or source material (Davidsen, 1996), to form a comprehensive framework for learning in and about complex systems.


Appendix
Overview of Results of Experiment

Original group structure
(Group means M; p values from pairwise Mann-Whitney comparisons. *p < .05.)

Business test
  Group means: TG1 = 5.6429, TG2 = 7.2143, TG3 = 7.8571, CG = 6.2857
  p values: TG1/TG2 = 0.3470; TG1/TG3 = 0.1470; TG1/CG = 0.7590; TG2/TG3 = 0.3090; TG2/CG = 0.3290; TG3/CG = 0.1080

Improvement between knowledge tests
  Group means: TG1 = 5.1278, TG2 = 3.2471, TG3 = 5.6257, CG = 4.8792
  p values: TG1/TG2 = 0.6790; TG1/TG3 = 0.8180; TG1/CG = 0.1290; TG2/TG3 = 0.5200; TG2/CG = 0.7300; TG3/CG = 0.0600

Knowledge pretest
  Group means: TG1 = 27.6074, TG2 = 22.4175, TG3 = 26.9635, CG = 31.8912
  p values: TG1/TG2 = 0.3010; TG1/TG3 = 0.8900; TG1/CG = 0.1030; TG2/TG3 = 0.3350; TG2/CG = 0.0460*; TG3/CG = 0.0540

Game score
  Group means: TG1 = 18.9333, TG2 = 16.6695, TG3 = 19.3807, CG = 18.0849
  p values: TG1/TG2 = 0.0540; TG1/TG3 = 0.3580; TG1/CG = 0.4910; TG2/TG3 = 0.0270*; TG2/CG = 0.1830; TG3/CG = 0.2650

Knowledge posttest
  Group means: TG1 = 32.7352, TG2 = 25.6646, TG3 = 32.5892, CG = 33.9639
  p values: TG1/TG2 = 0.0660; TG1/TG3 = 0.7830; TG1/CG = 0.7650; TG2/TG3 = 0.0430*; TG2/CG = 0.0150*; TG3/CG = 0.5970

Change in equity
  Group means: TG1 = -16.4444, TG2 = -37.2480, TG3 = -16.5913, CG = -28.6813
  p values: TG1/TG2 = 0.0080*; TG1/TG3 = 0.8900; TG1/CG = 0.0240*; TG2/TG3 = 0.0100*; TG2/CG = 0.2510; TG3/CG = 0.0980

Modified group structure
(Group means M; p values from Mann-Whitney comparisons of TG1+TG3 vs. TG2+CG. *p < .05.)

Business test
  Group means: TG1+TG3 = 6.7500, TG2+CG = 6.7500; p = 0.6260

Improvement between knowledge tests
  Group means: TG1+TG3 = 5.3767, TG2+CG = 2.6599; p = 0.1220

Knowledge pretest
  Group means: TG1+TG3 = 27.2855, TG2+CG = 27.1543; p = 0.5770

Game score
  Group means: TG1+TG3 = 19.1570, TG2+CG = 17.3772; p = 0.0330*

Knowledge posttest
  Group means: TG1+TG3 = 32.6622, TG2+CG = 29.8142; p = 0.2790

Change in equity
  Group means: TG1+TG3 = -16.5178, TG2+CG = -32.9646; p = 0.0010*

NOTE: TG1 = Treatment Group 1, TG2 = Treatment Group 2, TG3 = Treatment Group 3, CG = control group. *p < .05.

Notes

1. This article is about general business simulators. It does not primarily deal with simulation tools that are developed during modeling sessions with a particular client in a particular situation.

2. We prefer the term "business simulators" instead of "management flight simulators." For a definition of terms related to computer-based simulation tools for learning purposes, see Maier and Größler (1998).

3. The effectiveness of other teaching approaches, for example, case studies, also is not proved (Lane, 1995).

4. In computer science, the corresponding term would be "white-box." However, in the system dynamics field, the more intuitive term "transparent-box" has been used by José A. D. Machuca in oral communications since 1991. In addition, in the field of user interface development, the term "transparent" is often used to describe aspects of a system whose existence and characteristics are not known to the user. This must not be confused with the term's use in this article.


5. A mental model does not necessarily have to be complete and correct to allow for efficient control of complex systems (Ringelband, Misiak, & Kluwe, 1990).

6. For a more conceptual approach to the question of implementing transparency, see, for example, Größler (1998a).

7. Vensim also offers the possibility of "causal tracing," which allows the user to access causal structures of a model dynamically. How this approach could be used for providing comprehensive structural information has not been examined so far.

8. One can also consider the double function of the simulation tool used: it serves as a treatment as well as a way to measure the results of that treatment. Methodological problems concerning this point have not been widely discussed so far. See Funke (1993) for a criticism aiming in the same direction and the claim for a clear distinction between knowledge acquisition and knowledge application. A strategic mode within the simulation, which can easily be implemented in POWERSIM-based simulators, could be a means of separating knowledge mediation from control performance assessment.

9. Because a normal distribution of results could not be assumed, nonparametric tests, in particular Mann-Whitney tests, were used for comparing the groups. Mann-Whitney comparisons of independent groups are based on mean ranks. Because they convey more information, however, the actual group means are given in the appendix.

10. A larger number of cases also would allow smaller effect sizes to become significant from a statistical point of view (Hogg & Craig, 1965). In addition, it could allow other tests of the reliability of the experiment, for example, calculating the test-retest stability (Moser, 1993).

11. The correlation of game performance with the knowledge test described in this article is much better than that for another knowledge test that was used earlier in connection with LEARN! (Wittmann et al., 1996).

References

Ashby, W. R. (1958). An introduction to cybernetics. New York: Chapman & Hall.
Bakken, B., Gould, J., & Kim, D. (1994). Experimentation in learning organizations: A management flight simulator approach. In J.D.W. Morecroft & J. D. Sterman (Eds.), Modeling for learning organizations (pp. 243-266). Portland, OR: Productivity Press.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Houghton Mifflin.
Cronbach, L. J. (1990). Essentials of psychological testing (5th ed.). New York: HarperCollins.
Davidsen, P. I. (1996). Educational features of the system dynamics approach to modeling and simulation. Journal of Structural Learning, 12(4), 269-290.
Davidsen, P. I., & Spector, J. M. (1997). Cognitive complexity in system dynamics based learning environments. In Y. Barlas, V. G. Diker, & S. Polat (Eds.), 15th International System Dynamics Conference: Systems approach to learning and education into the 21st century (pp. 757-760). Istanbul, Turkey: Bogazici University Press.
de Geus, A. P. (1988). Planning as learning. Harvard Business Review, 66(2), 70-74.
Diehl, E. W. (1992). Participatory simulation software for managers: The design philosophy behind microworld creator. European Journal of Operational Research, 59(1), 210-215.
Dörner, D. (1992). Die Logik des Mißlingens [The logic of failure]. Reinbek bei Hamburg, Germany: Rowohlt.
Doyle, J. K., & Ford, D. N. (1997). Mental models concepts for system dynamics research. In Y. Barlas, V. G. Diker, & S. Polat (Eds.), 15th International System Dynamics Conference: Systems approach to learning and education into the 21st century (pp. 323-326). Istanbul, Turkey: Bogazici University Press.
Forrester, J. W. (1994). System dynamics, systems thinking and soft OR. System Dynamics Review, 10(2/3), 245-256.


Funke, J. (1985). Steuerung dynamischer Systeme durch den Aufbau und Anwendung subjektiver Kausalmodelle [Controlling dynamic systems by building and using subjective causal models]. Zeitschrift für Psychologie, 193(4), 443-465.
Funke, J. (1993). Microworlds based on linear equation systems: A new approach to complex problem solving and experimental results. In G. Strube & K. F. Wender (Eds.), The cognitive psychology of knowledge (pp. 313-330). Amsterdam, the Netherlands: Elsevier.
Gagné, R. M. (1985). The conditions of learning and theory of instruction (4th ed.). Fort Worth, TX: Rinehart and Winston.
Goodman, M. (1994). Using microworlds to promote inquiry. In P. M. Senge, A. Kleiner, C. Roberts, R. B. Ross, & B. J. Smith (Eds.), The fifth discipline fieldbook (pp. 534-536). New York: Doubleday.
Graham, A. K., Morecroft, J.D.W., Senge, P. M., & Sterman, J. D. (1994). Model-supported case studies for management education. In J.D.W. Morecroft & J. D. Sterman (Eds.), Modeling for learning organizations (pp. 219-241). Portland, OR: Productivity Press.
Größler, A. (1997). As time goes by: Self-proceeding management simulations. In Y. Barlas, V. G. Diker, & S. Polat (Eds.), 15th International System Dynamics Conference: Systems approach to learning and education into the 21st century (pp. 371-374). Istanbul, Turkey: Bogazici University Press.
Größler, A. (1998a). Design and implementation of transparency in business simulators (Working Paper No. 9806). Mannheim, Germany: Mannheim University, Faculty of Business Administration.
Größler, A. (1998b). Structural transparency as an element of business simulators. In System Dynamics Society (Ed.), Proceedings of the 16th International Conference of the System Dynamics Society (p. 39). Quebec City, Canada: System Dynamics Society.
Hogg, R. V., & Craig, A. T. (1965). Introduction to mathematical statistics (2nd ed.). New York: Macmillan.
Isaacs, W., & Senge, P. M. (1994). Overcoming limits to learning in computer-based learning environments. In J.D.W. Morecroft & J. D. Sterman (Eds.), Modeling for learning organizations (pp. 267-287). Portland, OR: Productivity Press.
Kluwe, R. H. (1993). Knowledge and performance in complex problem solving. In G. Strube & K. F. Wender (Eds.), The cognitive psychology of knowledge (pp. 401-423). Amsterdam, the Netherlands: Elsevier.
Kreutzer, W. B. (1994). A buyer's guide to off-shelf microworlds. In P. M. Senge, A. Kleiner, C. Roberts, R. B. Ross, & B. J. Smith (Eds.), The fifth discipline fieldbook (pp. 536-542). New York: Doubleday.
Lane, D. C. (1995). On a resurgence of management simulations and games. Journal of the Operational Research Society, 46, 604-625.
Langley, P. A. (1995). Building cognitive feedback into a microworld learning environment: Results from a pilot experiment. In T. Shimado & K. Saeed (Eds.), System dynamics '95 (pp. 628-637). Cambridge, MA: System Dynamics Society.
LEARN! Strohhecker, J. (1997). Schwetzingen: Simcon GmbH (Friedrich-Ebert-Strasse 70, 68723 Schwetzingen, Germany).
Machuca, J.A.D. (1990). A new generation of business models/games for management education. In G. E. Lasker & R. R. Hough (Eds.), Advances in support systems research (pp. 478-482). Windsor, Canada: International Institute for Advanced Studies in Systems Research and Cybernetics.
Machuca, J.A.D. (1991). When creating and using games, are we neglecting the essential of system dynamics? In K. Saeed, D. Andersen, & J. Machuca (Eds.), System dynamics '91 (pp. 329-335). Bangkok, Thailand: System Dynamics Society.
Machuca, J.A.D. (1992). Are we losing one of the best features of system dynamics? System Dynamics Review, 8(2), 175-177.
Machuca, J.A.D., & Carrillo, M.A.D. (1996). Transparent-box business simulators versus black-box business simulators: An initial empirical comparative study. In G. P. Richardson & J. D. Sterman (Eds.), Proceedings of the 1996 International System Dynamics Conference (pp. 329-332). Cambridge, MA: System Dynamics Society.
Machuca, J.A.D., Ruiz, J. C., Domingo, M. A., & González, M. M. (1998). Our 10 years of work on transparent-box business simulation. In System Dynamics Society (Ed.), Proceedings of the 1998 International System Dynamics Conference (p. 5). Quebec City, Canada: System Dynamics Society.


Maier, F. H., & Größler, A. (1998). A taxonomy for computer simulations to support learning about socio-economic systems. In System Dynamics Society (Ed.), Proceedings of the 16th International Conference of the System Dynamics Society (pp. 57-58). Quebec City, Canada: System Dynamics Society.
Milling, P. M. (1991). Strategische Planungs- und Kontrollsysteme zur Unterstützung betrieblicher Lernprozesse [Strategic planning and control systems for the support of organizational learning processes]. In P. M. Milling (Ed.), Systemmanagement und Managementsysteme [Management of systems and management systems] (pp. 11-31). Berlin, Germany: Duncker & Humblot.
Milling, P. M. (1996). Simulationen in der Produktion [Simulation in production control]. In W. Kern, H.-H. Schröder, & J. Weber (Eds.), Handwörterbuch der Produktionswirtschaft [Handbook of production management] (pp. 1840-1852). Stuttgart, Germany: Schäffer-Poeschel.
Milling, P. M., & Maier, F. H. (1997). On the effectiveness of corporate gaming environments and management simulators. In Y. Barlas, V. G. Diker, & S. Polat (Eds.), 15th International System Dynamics Conference: Systems approach to learning and education into the 21st century (pp. 335-339). Istanbul, Turkey: System Dynamics Society.
Morecroft, J.D.W. (1994). Executive knowledge, models, and learning. In J.D.W. Morecroft & J. D. Sterman (Eds.), Modeling for learning organizations (pp. 3-28). Portland, OR: Productivity Press.
Moser, K. (1993). Planung und Durchführung organisationspsychologischer Untersuchungen [Planning and implementation of examinations in organizational psychology]. In H. Schuler (Ed.), Lehrbuch Organisationspsychologie [Psychology in organizations] (pp. 71-105). Bern, Switzerland: Hans Huber.
Niegemann, H. M., & Hofer, M. (1997). Ein Modell selbstkontrollierten Lernens und über die Schwierigkeiten, selbstkontrolliertes Lernen hervorzubringen [A model of self-controlled learning and on the difficulties of producing self-controlled learning]. In H. Gruber & A. Renkl (Eds.), Wege zum Können: Determinanten des Kompetenzerwerbs [Ways to know-how: Determinants of acquiring competences] (pp. 263-280). Bern, Switzerland: Hans Huber.
Paich, M., & Sterman, J. D. (1993). Boom, bust, and failures to learn in experimental markets. Management Science, 39(12), 1439-1458.
Pidd, M. (1993). Computer simulation in management science (3rd ed.). Chichester, UK: Wiley.
Putz-Osterloh, W. (1993). Unterschiede im Erwerb und in der Reichweite des Wissens bei der Steuerung eines dynamischen Systems [Differences in acquisition and range of knowledge in controlling a dynamic system]. Zeitschrift für experimentelle und angewandte Psychologie, 40(3), 386-410.
Richardson, G. P., & Pugh, A. L. (1983). Introduction to system dynamics modeling with DYNAMO. Cambridge: Massachusetts Institute of Technology Press.
Ringelband, O. J., Misiak, C., & Kluwe, R. H. (1990). Mental models and strategies in the control of a complex system. In D. Ackermann & M. J. Tauber (Eds.), Mental models and human-computer interaction 1 (pp. 151-164). Amsterdam, the Netherlands: Elsevier.
Scheele, B., & Groeben, N. (1984). Die Heidelberger Struktur-Lege-Technik (SLT) [The Heidelberg structure-laying technique]. Weinheim, Germany: Beltz.
Senge, P. M., & Sterman, J. D. (1992). Systems thinking and organizational learning: Acting locally and thinking globally in the organization of the future. European Journal of Operational Research, 59, 137-150.
Spector, J. M., & Davidsen, P. I. (1997). Constructing effective interactive learning environments using system dynamics methods and tools: Interim report (EIST Publications and Reports, No. 1). Bergen, Norway: University of Bergen.
Sterman, J. D. (1994). Learning in and about complex systems. System Dynamics Review, 10(2/3), 291-330.
Süß, H.-M. (1996). Intelligenz, Wissen und Problemlösen [Intelligence, knowledge, and problem solving]. Göttingen, Germany: Hogrefe.
Süß, H.-M., Beauducel, A., Kersting, M., & Oberauer, K. (1992). Wissen und Problemlösen [Knowledge and problem solving]. Paper presented at the 38th Congress of the German Psychology Society, Trier, Germany.
Wittmann, W., Süß, H.-M., Schulze, R., Wilhelm, O., Conrad, W., Wagener, D., & Oberauer, K. (1996). Determinanten komplexen Problemlösens [Determinants of complex problem solving]. Report on the project "Problemlösungskapazität als Funktion von Intelligenz, Wissen und Persönlichkeit" [Problem-solving capacity as a function of intelligence, knowledge, and personality]. Unpublished manuscript.
Zimbardo, P. G. (1992). Psychology (5th ed.). Berlin, Germany: Springer.

Andreas Größler is a research and teaching assistant at the Institute of Technology (Industrieseminar) at Mannheim University. His main research interests lie at the intersection of computer science, management science, and psychology.

Frank H. Maier is a senior research assistant and lecturer at the Institute of Technology (Industrieseminar) at Mannheim University. He received a doctorate from Mannheim University. His research concentrates on the use of system-dynamics-based computer simulations to enhance understanding and learning in the context of management decision making, as well as on success factors in manufacturing.

Peter M. Milling is a professor of management. He holds the chair for general management and operations management and is director of the Institute of Technology (Industrieseminar) at Mannheim University. His research focuses on systems thinking, computer simulation, and manufacturing strategy.

ADDRESSES: Andreas Größler, Industrieseminar, University of Mannheim, Schloss, 68131 Mannheim, Germany; telephone +49 (0)621-181-1583; fax +49 (0)621-181-1579; e-mail [email protected]. Frank H. Maier, Industrieseminar, University of Mannheim, Schloss, 68131 Mannheim, Germany; telephone +49 (0)621-181-1585; fax +49 (0)621-181-1579; e-mail [email protected]. Peter M. Milling, Industrieseminar, University of Mannheim, Schloss, 68131 Mannheim, Germany; telephone +49 (0)621-181-1577; fax +49 (0)621-181-1579; e-mail [email protected].
