

Circumscribing system dynamics modeling and building confidence in models: A personal perspective

Khalid Saeed1

November 2017

Many years ago, a maverick bird hatched into a well-respected flock of peacocks. Peacocks, especially of this flock, were known for their stately walk. However, the maverick peacock, let us call him Junior, walked funny – more like an unshapely duck. All peacocks of the flock worried about Junior. How could he get ahead in life with that funny walk? Junior did have a unique quality, though: he could fly beautifully, while the other peacocks of the flock only walked beautifully.


Abstract

While there is a consensus among system dynamics scholars that the dichotomous term validity must be replaced by the term confidence for system dynamics models, it is unclear what qualifies as a system dynamics model: a computational instrument for forecasting, or an experimental tool to inform the policy process? And what exactly needs to be done to build confidence in a model? The confidence building process is described in the system dynamics writings at a rather philosophical level that can be used to justify almost any model. The confidence building procedures provided in the textbooks are sketchy, do not distinguish between forecasting and policy models, and do not adequately describe the iterative process subsumed in the various steps of model construction that might yield confidence. Confidence in forecasting models is an article of faith, no matter how detailed they might be and how diligently they are calibrated. Forecasting models are, in any case, irrelevant to system dynamics practice, which must focus on policy. This paper revisits the problem of confidence in system dynamics models addressing policy and attempts to carefully describe what qualifies such models and the process practitioners must follow to build confidence in them.

1. Professor, Social Science and Policy Studies, Worcester Polytechnic Institute, Worcester, MA 01609, USA. Contact: [email protected]

Page 1 of 27


As the famous Box (1976) axiom has it, all models are wrong: what we perceive can indeed be very different from what exists when reality must be known through our limited perceptions. Berger and Luckmann (1966) even alluded to the possibility that we may never be able to claim to have a right model because of our perceptual limitations. In a light-hearted account, Abbott (1952) illustrated how limited perceptions might reduce a 3-dimensional reality to a 2-dimensional Flatland that is very different from the reality. Still, we must see the world through the reality we construct through a model, mental or explicit, and thus model validation must be tied to some attributes of the wrong models we create. Both philosophical and practical aspects of the validation process for system dynamics models have been addressed by many scholars, notably by Meadows (1980), Graham (1980), Barlas (1996) and Lane (2015). Tests for building confidence in system dynamics models, originally outlined by Forrester and Senge (1980), have been reiterated by Sterman (2000), Morecroft (2007) and Richardson and Pugh (1981) in their respective textbooks, but there exists a lot of variability in their use as well as in the outcomes of their use.

Jay Forrester defined the key model validation principle as the establishment of confidence in the soundness and usefulness of a model (Forrester 1971). Richardson (2011) elaborated on this principle by suggesting that the validation question must not be "Is the model valid?" but "Is the model suitable for its purpose and the problem it addresses?" When that question is focused on usefulness rather than seeking a dichotomous answer, the term validity is no longer relevant and is replaced by confidence, suitability and usefulness. These criteria suggest moving away from the dichotomy of valid and invalid to a continuum of purpose-related judgments. Indeed, soundness, suitability and usefulness can be defined as a range, 0% to 100%, while validity might be tied to right or wrong outcomes. There is a pitfall, though, in moving away from the dichotomous definition of validity: soundness, suitability and usefulness can be interpreted in many ways, including some self-serving ones, and all kinds of models may be able to meet those requirements. This raises another question: should the term system dynamics model be circumscribed?


In this paper I'll attempt first to define the principles that should qualify a model as system dynamics, and then describe the confidence building process that should make it sound and suitable for its intended purpose. I'll also attempt to outline the iterative procedures required to build confidence in system dynamics models and show how they must be applied to each step of the modeling process. I know some of our colleagues might hate the divisiveness I am proposing, which would exclude a good proportion of the models now called system dynamics, but I'll take that risk.

What qualifies as a system dynamics model?

Let me begin with two modeling episodes, both addressing the same problem and implemented in succession:2


A service organization enlists a highly reputable research agency to construct a model of its service program. This agency employs system dynamics to create an elaborate model giving forecasts of the demand for the service. The research agency claims that its forecasts are valid since the model it constructed subsumes considerable detail and has been carefully calibrated. These forecasts are used by the service organization to justify its budget projections. The budget is obtained and additional service infrastructure is built, but it remains unexpectedly underutilized.


The service organization commissions another modeler to help understand the reasons for the underutilized capacity. After many conversations with the client, this modeler constructs a gaming environment addressing a related problem in a different domain. The game is driven by a very simple model in which the service infrastructure can be pro-actively upgraded by linking it to the revenue stream. The actual impact of service, however, depends on service delivery, which in turn depends on effective utilization of the infrastructure. The service delivery capacity is never explicitly known, and its inadequacy is discerned only when there are complaints or when the infrastructure is under-utilized. This disposition can spiral, leading the organization into failure even when there is adequate infrastructure. The gameplay was designed to invoke thinking by the players on the problem. When played by middle managers in the organization, the players could easily relate to the game structure and point out that, in fact, their organization faces a similar problem.

2. These are real episodes. Names of the concerned organizations and people have been withheld for confidentiality.

Which of the two models constructed in the above situations is valid when measured on the criteria of suitability to purpose and usefulness? Perhaps both. The purpose of the first was to justify a budget request. The second was aimed at understanding a given problem. And both accomplished their intended objectives. The two models are, however, very different and call for different ways to construct and build confidence in them. Figure 1 shows a map of a typical policy process and highlights the inputs from the two types of models. Forecasting models provide estimates of the impending future. Policy models create recipes for dealing with the impending future.

The first type consists of computationally complex instruments whose output is not verifiable. The second consists of simple constructs that may or may not be cognizant of the structure underlying the need for policy.

Figure 1: Models in the policy process

Use of an elaborate forecasting instrument together with a simplistic policy construct is a fatal recipe that will invariably lead to unintended consequences, which system dynamics modeling is expected to bring to the fore. While both types of models are suitable for the purpose they are built for, their confidence building procedures differ, since the two represent different slices of reality, as I have discussed in Saeed (1992).

The slice of reality represented by forecasting models

According to the principles I outlined in Saeed (1992), forecasting models represent a slice of reality that subsumes history as it unfolds in specific situations (Figure 2). They should replicate the complex behavior arising out of the simultaneous occurrence of patterns like growth, oscillation and other multiple trends.

Figure 2: The slice of reality subsumed in high fidelity models (adapted from Saeed 1992). The figure maps the mode space of a complex system along time separated, geography separated and simultaneous modes; the highlighted slice represents a high fidelity model.


Sometimes euphemistically called high fidelity models, they often incorporate much computational complexity and invariably have an arbitrary boundary and not much feedback structure. A lot can be thrown into them depending on the modeler's and the client's preferences. The validity of such models is, however, an article of faith, since their predictions cannot be verified or precisely explained in terms of their complex structure. The confidence in a high fidelity model is built at the outset by creating an impressive instrument, usually subsuming a lot of detail. Such a model is often calibrated using some estimates of parameters, which may be further tweaked to replicate history. These parameters may also be computed using formal algorithms that select a parameter set minimizing the deviation between the model behavior and the historical records, in which case they are model dependent. In some instances, the model might even be used directly for prediction of the future without replicating history, while its parameters are estimated independently of its structure, as originally recommended by Graham (1980). The ability to track history, as well as the detail the model subsumes, is taken as a testament to its ability to forecast the future without even providing a feedback explanation of the model behavior. Mind you, replicating history is not difficult if the model has a simplistic feedback structure.

Looking at the original modeling endeavors of Forrester and his students, I am pretty certain that system dynamics was not intended to be applied to forecasting, but to enriching the policy process. As I have alluded to in Saeed (1992), models for refining the policy process subsume a different slice of reality than those constructed for forecasting.
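The model-dependent calibration described above can be sketched in a few lines. This is a hypothetical illustration, not a procedure from the paper: a one-parameter growth model is fitted to an invented "historical" series by searching for the rate that minimizes the squared deviation, which is precisely the sense in which the fitted parameter is an artifact of the chosen structure rather than a test of forecasting ability.

```python
# Hedged sketch (not from the paper): calibrating one parameter of a toy
# first-order growth model by minimizing squared deviation from "history".
# The model, data, and parameter names are all hypothetical illustrations.

def simulate(growth_rate, x0=100.0, steps=10):
    """Euler-integrate dx/dt = growth_rate * x with dt = 1."""
    x, path = x0, []
    for _ in range(steps):
        path.append(x)
        x += growth_rate * x
    return path

def squared_error(model, data):
    return sum((m - d) ** 2 for m, d in zip(model, data))

# Invented "historical record" generated from rate 0.05 for the demo.
history = simulate(0.05)

# Grid search: pick the rate that best replicates history.
candidates = [i / 1000 for i in range(0, 101)]
best = min(candidates, key=lambda r: squared_error(simulate(r), history))
print(best)  # the search recovers ~0.05: a good fit, yet the fitted value
             # is tied to this structure; tracking history says nothing
             # about the model's ability to forecast
```

The same fit could be obtained from many different structures, which is why the text treats such calibration as building faith rather than confidence.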

The slice of reality subsumed in a policy model

In a recent personal communication to my colleague Bob Eberlein and me about his unwritten book on economic dynamics, Jay Forrester asserted his position on system dynamics modeling as follows: “On modeling, we will set a fundamentally different approach from what is currently dominant. Models are not derived from historical time series but from the mental data base and the written data base that reflect the microstructure and the governing policies of the real world. Forecasting of future conditions is not a measure of model suitability because forecasting is nearly impossible, and the goal is to understand how changes in policies affect


behavior. Historical time series are not inputs to model building but should be used to compare model behavior with the real economy."3

Clearly, Forrester viewed the system dynamics effort as directed at refining policy, not forecasting. Policy models have been referred to by many names: low fidelity models, generic systems, standard models, canonical forms, archetypes, metaphors, etc. They portray a very stylized view of a pervasive structure. They are great tools for creating a refined policy framework, since they subsume the structure needed for understanding the nature of an experienced pattern of behavior and changing it towards a desired outcome. Building policy models and building confidence in them requires an entirely different approach, as their behavior may not replicate any recorded history.

Figure 3 illustrates how policy models can be enriched using system dynamics. Simple policy paradigms are driven by constructs that are obvious in a system. They often lie in the immediate vicinity of the problem symptoms and represent only the tip of the iceberg. The root causes reside, on the other hand, in the latent structure that is a step or two removed from the symptoms. This latent structure creates resilience to, and unintended consequences for, any interventions designed on the basis of the visible structure to alleviate symptoms. Therefore, policy models must subsume the latent structure that might often be hidden from view. Latent structure can sometimes be discerned by attempting to understand policy failure and interpreting a variety of happenstances as multiple manifestations created by unique situations. Such an effort can lead to constructing low fidelity metaphorical models that apply to many situations and that can be experimented with to understand consequences of past interventions and create robust new policy designs.


3. From a personal communication to Eberlein and Saeed on the content of Forrester's unwritten book on Economic Dynamics, June 27, 2015.


Figure 3: The general structure of a low fidelity policy model. The figure partitions the system into a visible system, whose structure drives simple policy paradigms, and a latent system, whose structure creates unintended consequences.

This brings us to the $64,000 question: how do we capture what is beneath the surface? The answer lies in slicing the problem differently from the process I described for creating a high fidelity model. Figure 4 illustrates the slice of reality we must capture in a model if it is to be applied to improving the policy process. The reference mode for a policy model will not be based on the simultaneous multiple modes of behavior available in historical data, but on multiple manifestations of the problem experienced in different places, different organizations, and over different periods of history. To create a reference mode for such a model, we must first construct the long run dynamics that cover boom as well as bust times, functional as well as dysfunctional outcomes. For this, we must observe multiple manifestations appearing in different situations, different institutions, different places and different periods of history. We must also decompose complex historical behavior into its simpler components and consider multiple manifestations in the context of a single component.

The slice of reality constructed from this process maps into the red partition of our behavior space as shown in Figure 4, which is again adapted from Saeed (1992). Creating such a slice requires partitioning complex historical patterns into simpler parts, while constructing long term dynamics that subsume time separated modes as well as multiple outcomes, and hope and fear scenarios that subsume geography separated modes.

Figure 4: Slicing information for creating a ubiquitous structure for policy design (adapted from Saeed 1992). In the mode space of a complex system, the highlighted slice, cutting across time separated and geography separated modes, represents a low fidelity but ubiquitous policy model.

The model resulting from such a reference mode does not fit any specific situation. It will not replicate any recorded history, but it might produce the variety of specific patterns, including multiple equilibria, appearing in recorded history and in the fear and hope scenarios that project past trends into the future. It will also not give quantitative forecasts of the future, but it can create future scenarios and identify policies leading to them.
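The claim that a single stylized structure can produce multiple equilibria can be illustrated with a deliberately tiny sketch. This is my example, not a model from the paper: one stock whose net flow has two stable equilibria separated by an unstable threshold, so the same structure settles into different long-run outcomes depending on where it starts, the kind of path dependence a policy model may need to portray.

```python
# Hedged illustration (not a model from the paper): a single stock whose
# net flow dx/dt = x * (1 - x) * (x - 0.3) has two stable equilibria,
# x = 0 and x = 1, separated by an unstable threshold at x = 0.3.
# Starting on either side of the threshold sends the same structure to a
# different long-run outcome.

def run_to_equilibrium(x0, dt=0.1, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * x * (1.0 - x) * (x - 0.3)
    return x

low = run_to_equilibrium(0.29)   # just below the threshold: decays toward 0
high = run_to_equilibrium(0.31)  # just above the threshold: grows toward 1
print(round(low, 3), round(high, 3))
```

A policy intervention in such a structure is then naturally framed as moving the system across the threshold, or moving the threshold itself, rather than as a point forecast.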

Examples of policy models

The use of generic models as policy instruments has been addressed by Meadows (1970), Graham (1988), Eberlein and Hines (1996), Lane (1996) and Morecroft, et al. (1995). Forrester often said that some 20 or so ubiquitous structures can possibly be applied to addressing most policy issues around us. The ubiquitous structures he presented include Advertising (Forrester 1959), Market Growth (Forrester 1968), Industrial Dynamics (Forrester 1961), Urban Dynamics (Forrester 1969), and World Dynamics (Forrester 1971). The former two describe aspects of corporate growth that many organizations wrestle with. The latter three, together with his unfinished National Model, represent a restructured theory of economic behavior that can be applied to managing firms, markets, regions and economies, which I have attempted to explain in Saeed (2015) and Saeed (2015a).


I have also attempted to collect about a dozen generic models, described as parables, as examples of policy models (Saeed 2016). Table 1 describes the policy issues, the simplistic models often used to address them and the refined system dynamics policy structures I propose for replacing them.

| # | Symptoms | Linear policy paradigm | Long term policy outcome | Latent structure causing policy resilience | Metaphorical SD model describing the latent structure | Alternative policy cognizant of latent structure |
| 1 | Hunger, poverty | | Intervention burden | Iron Law | Food relief (Runge 1975) | Social services |
| 2 | Underdevelopment, stagnation | | Overshoot and decline | Capacity limits | Sahel (Picardi & Siefert 1976) | Capacity preservation |
| 3 | Underutilized infrastructure | | Recovery and decline | Latent capacity constraints | People Express (Sterman 1988, Saeed 2015b) | Latent capacity augmentation |
| 4 | Pests, crime, dissent, insurgence | Pest control | Problems persist or intensify | | Stray dogs (Saeed 2008) | Latent capacity elimination |
| 5 | Poverty, subsistence | Pick low-hanging fruit | Famine follows feast | Latent capacity support | Snake poaching (Saeed 2003; WPI course SD551) | Resource allocation |
| 6 | | Breakdown repair | Need for heroic interventions | Interconnected conservative system | Watershed (Saeed 2012; WPI Watershed demo) | Watershed management |
| 7 | Corruption, non-legitimate production, mafias, anarchy | Winning battles | Losing war | Dynastic cycle | Farmers, Bandits, Soldiers (Saeed and Pavlov 2008) | Interdependence preservation |
| 8 | Economic ups and downs, surpluses and shortages, feasts and famines | More firefighting | | | PID control (Saeed 2009) | Control with functions of error |
| 9 | Failing organizations | Preventing failure | Innovation suppressed | Aging chains | Urban Dynamics (Forrester 1969) | Creative destruction |
| 10 | Resource shortage | Digging deeper | Depletion | Resource basket | Resource basket (Saeed 1985, 2004, 2013) | Severance penalties, mitigation banking |
| 11 | Diminishing yield | Harvest more | Unsustainable use | | Fishbanks (Meadows 1989) | Commons management |
| 12 | Low output | Produce more | Stagnation | Low productivity | Capability traps (Repenning and Sterman 2002) | Balanced investment decisions |

Table 1: Linear policy paradigms and the policy models proposed to replace them (Source: Saeed 2016)


This is not an exhaustive list, and it is not given in any particular order, but it arguably subsumes a large number of public policy situations.

How should policy models be validated?

Forrester posited that the process of building models is more important than the models arising from it (Forrester 1985). This process and the model it creates can indeed take the simplistic policy paradigms to the next level. At the outset, it must select a different slice of reality than the one forming the basis of a forecasting model, as illustrated in the previous section. Hence historical data becomes irrelevant and may not directly serve as the reference mode; instead, the many manifestations of a slice of reality appearing in different periods and geographic locations must be used to discern a reference mode for the model.

A confidence building process must nevertheless meet the requirements of science. To meet these requirements, a model must exist, be verifiable, be unique for the problem being addressed and be refutable (Casti 1981). Different modeling practices have created different procedures to implement the reality checking process. System dynamics practitioners have outlined a modeling procedure consisting of five or so steps, including creation of a reference mode, a dynamic hypothesis, a structurally valid model, a behaviorally valid model and policy design experiments. At the outset, the outcomes of each of the modeling steps must be redefined. Here is a statement of these steps and what they really mean.

1. Reference mode – not historical time series but an abstract representation of system behavior.

2. Dynamic hypothesis – not a complex feedback map, but an abstract and aggregate mental model of system structure expressed in terms of a simple feedback map that can explain the reference mode.


3. Structurally valid model – not a complex stock and flow model with an arbitrary boundary, but a stylized and reality checked computer representation of the decision process in each aggregate subsystem in the dynamic hypothesis.

4. Behaviorally valid model – not a model that merely replicates history and creates future scenarios, but a robust, stylized and reality checked structure that can create the many behavior modes delineated in the reference mode.

5. Policy design – not a normative statement, but an abstract metric of real life interventions that can be mapped into the model and simulated to assess their performance.

Unfortunately, there is no straightforward way of arriving at any of the abstract concepts these steps aim to create. Confidence in a model requires careful reality checks leading to the desired outcome of each step, and a confidence building procedure must address all steps of modeling and subsume the many iterations called for in each. Figure 5, which is taken from Saeed (1992), outlines an iterative process that can guide the implementation of these steps, but it lacks the specific tools and procedures pertinent to each step, which I'll attempt to describe in the next section.

Figure 5: The iterative procedure of system dynamics modeling (Source: Saeed 1992). Empirical evidence, literature, experience and alternative models inform a loop in which the perception of system structure is conceptualized into a representation of model structure, from which model behavior is deduced; structure-validating and behavior-validating comparison and reconciliation steps, supported by diagramming and description tools, empirical and inferred time series, and computing aids, close the loop.


Implementation of confidence building steps

Each of the model building steps outlined in the last section requires an iterative process to arrive at the abstract concept it must create. This process follows the general scheme of Figure 5, but its implements vary from step to step. The confidence building process is redefined for each step below:


Creating a sound reference mode

The reference mode is a graphical model of the problem. It is not just recorded historical data, but a stylized pattern, a fabric of intertwined trends extracted from historical data whose phase relationships reveal their interdependent progression. The reference mode must therefore be surmised as an appropriate slice of the complex pattern to be investigated. If recorded history is a basis for constructing a reference mode, the multiple modes subsumed in it must be separated either by inspection or by using a filtering process (like averaging) that strains out unneeded periodicities. The selected slice of the problem must be extended beyond short-term history. It must also subsume a continuum of experiences from different places and different periods of history and their extrapolated futures, as well as the fear and hope scenarios we can surmise. It must not have the noise or randomness typical of historical data. The time horizon of the reference mode must be long enough that the experiences from multiple time periods are covered and they play out the logic of the fabric. Numbers vanish in the construction of the reference mode; a fabric highlighting the pattern remains. I have discussed in detail the process of constructing a reference mode using historical information in Saeed (2003). Figure 6 shows the detail of the iterative process required for building a reference mode along with the specific implements it entails.
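The filtering idea mentioned above, averaging to strain out unneeded periodicities, can be sketched as follows. The series, window and numbers are invented for illustration:

```python
# Hedged sketch: separating modes in a historical series with a centered
# moving average, the kind of filtering the text suggests when surmising
# a reference mode. The series below is invented for the demo: a linear
# trend plus a short 12-period cycle plus noise.
import math
import random

random.seed(1)
series = [0.5 * t + 3.0 * math.sin(2 * math.pi * t / 12) + random.gauss(0, 0.5)
          for t in range(120)]

def moving_average(data, window):
    """Centered moving average over `window` consecutive points."""
    half = window // 2
    return [sum(data[t - half:t - half + window]) / window
            for t in range(half, len(data) - half)]

# A window of one full cycle strains that cycle out exactly, leaving the
# long-term trend -- the slice of the pattern a reference mode might keep.
trend = moving_average(series, 12)
```

A window equal to one full period removes that periodicity exactly; shorter windows merely attenuate it, which is why the period of the mode being strained out has to be judged before filtering.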

At the outset, the reference mode building process involves constructing both the time profiles of the individual variables and their collective fabric revealing their interdependence. It calls for graphically representing both the individual patterns and their collection. The empirical information guiding the reality checks in the construction of a reference mode resides both in historical information and in stylized problem patterns like the various types of growth, overshoot


and oscillation as outlined in Sterman (2000). Many iterations of graphing and reconciliation with historical information as well as stylized problem patterns will be needed to create confidence in the individual profiles and their collection.

Figure 6: Iterative process for constructing the reference mode. Empirical evidence and experience, residing in historical information and stylized problem patterns, inform problem conceptualization and graphical representation; comparison and reconciliation cycles build confidence in the concept, confidence in the form, and integrity of the reference mode as a fabric of interdependent patterns.

The pattern represented in the reference mode preferably belongs to a familiar class of problems, like growth, decline, overshoot and oscillation. Additionally, systems may tend towards or be in a homeostasis displaying stagnation, a dysfunctional equilibrium or poor performance against designated criteria. They may also display policy resilience and path dependence. The iterative process of Figure 6 will extend over the following tasks, culminating in a sound reference mode:

1. Decomposition of historical information to identify a pattern of interest.

2. Aggregation of historical information to exclude detail that is not of interest.

3. Construction of missing variable profiles that establish the explanatory value of the fabric of patterns constituting the reference mode.

4. Guided by the interdependence of the individual profiles in the reference mode, intelligent extrapolation of the whole fabric into the future beyond past experience. Thus, growth might culminate in an equilibrium or overshoot; decline in a demise or another equilibrium; and a change in a different homeostasis.

5. Recognition that multiple situations may reveal different patterns of change and different equilibria.

6. Construction of fear and hope scenarios when the interdependent fabric can reveal multiple manifestations or has missing variables that may lead to speculative extrapolations.

The process of constructing a sound reference mode calls for carefully learning about the problem, which is the key to defining the correct boundary for the model. Hence, we must dwell a bit on constructing the reference mode. The information in the reference mode is not a direct basis for constructing the model, which must be built from experiential information, but the explanation of the interdependence of the trends included in the reference mode will lead to the construction of the initial mental model that we refer to as the dynamic hypothesis. A carefully constructed reference mode will also serve as an important reality check for the soundness of the computer model we construct.


Constructing a sound dynamic hypothesis

The dynamic hypothesis is a mental model. It does not distinguish between stocks and flows and is more aggregate than the computer model it will lead to. Its building blocks are feedback loops that are created by strings of causal links. It must, however, not be more complex than what can be interpreted by the thought process, which means no more complex than a few coupled feedback loops. Thus, a dynamic hypothesis might represent relationships between broad model subsystems rather than between all elements of the detailed model. It must explain the intertwined pattern in the reference mode and should preferably belong to a known feedback archetype. Figure 7 shows a specific manifestation of the procedure of Figure 5 for creating an iterative process that can be used to formulate a dynamic hypothesis and build confidence in it.

Note that the body of information we draw on for reality checks in this process is experiential information, not historical information (Forrester 1980). This information resides in individual perceptions of cause and effect as well as in the feedback archetypes corresponding to


the reference mode we have constructed. I am assuming you will have a good inventory of feedback archetypes to pick from, for example, as outlined in Senge (1990).

Figure 7: Iterative process for constructing the dynamic hypothesis. Empirical evidence and experience, residing in perceived causal relations and feedback archetypes, inform the aggregate feedback system and its feedback representation; comparison and reconciliation cycles build confidence in single feedback loops, confidence in the coupled feedback structure, and integrity of the dynamic hypothesis as a fabric of interdependent feedbacks.

It should be pointed out that the step of creating a dynamic hypothesis can sometimes be skipped, especially when the model is not too big and complex. Referring to his Urban Dynamics Model, Forrester stated in his 2013 fireside chat:

"I prefer to model by identifying the stocks that I want to work with. I did not start with causal loop diagrams or some of the things that are very popular. Start with the stocks." (Saeed 2013)

Barry Richmond4 also often argued with me about the use of a dynamic hypothesis. He maintained that the model should fit a page and that an aggregate causal model implicit in a dynamic hypothesis is unnecessary. These positions are however contingent on the size of the model. If the model spans several pages, an aggregate version not only helps conceptualize the detailed structure, it also serves as a reality check to verify the connectivity of the detailed structure. Forrester's DYNAMO equations were indeed organized by sectors and the relationships between them were implicit in his narratives. Barry Richmond's brain child, Stellatm, allows organization of complex models into sectors or modules. It also allows viewing relationships among sectors/modules through an interface that should return the aggregate mental model stated as the dynamic hypothesis at the outset.

4. Barry Richmond (1946-2002) is credited with pioneering the graphics based modeling software Stellatm that opened computer based simulation modeling to a wide cross-section of practitioners. See Forrester JW. 2010. Foreword. In J Richmond, L Stuntz, K Richmond, J Egner (Eds). Tracing Connections, Voices of Systems Thinkers. Lebanon, NH: isee Systems.

The dynamic hypothesis will represent an aggregate rather than a detailed feedback structure. It must not be viewed as a set of single feedback loops in isolation but as a fabric of feedback relationships driving the behavior constructed in the reference mode and explaining any turning points in it. It will show whether any significant delay processes exist in any of the feedback loops included in the fabric. It will also be cognizant of the coupling existing between the feedback loops that may create compensating adjustment paths leading to parameter insensitivity. A valid dynamic hypothesis constructed through this iterative process will be simple and aggregate, and will explain all turning points included in the reference mode.
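As a toy illustration of such a fabric (my sketch, not a hypothesis from the paper): a reinforcing growth loop coupled with a balancing capacity loop produces the S-shaped pattern whose turning point a dynamic hypothesis must explain, and the compensating loops keep the qualitative behavior insensitive to the exact loop gain:

```python
# Hedged sketch: a reinforcing loop (rate * x) coupled with a balancing
# loop (1 - x / capacity) -- the classic logistic structure. The coupling
# explains the turning point in an S-shaped reference mode; the rate,
# capacity and step sizes below are illustrative assumptions.

def s_shaped_growth(rate, capacity=100.0, x0=1.0, dt=0.25, steps=400):
    x, path = x0, []
    for _ in range(steps):
        path.append(x)
        # reinforcing loop dominates early; balancing loop takes over
        # as the stock approaches capacity, producing the turning point
        x += dt * rate * x * (1.0 - x / capacity)
    return path

for r in (0.1, 0.2, 0.4):                 # vary the loop gain fourfold
    path = s_shaped_growth(r)
    print(r, round(path[-1], 1))          # every run still saturates near 100
```

The shift of dominance from the reinforcing to the balancing loop, not any particular parameter value, is what a sound dynamic hypothesis is asked to capture.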


Building a sound stock and flow model

A computer model represents an elaboration of the dynamic hypothesis. Each element of the dynamic hypothesis translates into a computer sub-model whose basic building blocks are stocks and flows. While all variables in a computer model can in principle be identified as stocks or flows, some of them may be represented as converters, constant parameters and built-in functions. Stocks that change very slowly with respect to the majority of the stocks can be viewed as constants. Stocks that change very fast with respect to the majority of the stocks can be viewed as converters. Stocks that serve as sources and sinks for unlimited and unconserved flows can be excluded and are represented as clouds. Built-in functions represent sub-models abstracting more detailed structure.
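To make these building blocks concrete, here is a minimal sketch in Python (an assumed illustration, not taken from the paper): a single stock, a constant inflow from a cloud, and a first-order draining outflow, integrated by Euler's method much as simulation software does internally. All names and numbers are illustrative.

```python
DT = 0.25                 # integration step (simulated years)
attrition_fraction = 0.1  # constant parameter: a stock changing too slowly to model

workforce = 100.0         # the stock: it accumulates its net flow
history = []
for _ in range(int(20 / DT)):                    # simulate 20 years
    hiring = 12.0                                # inflow from a "cloud" (unconserved source)
    attrition = attrition_fraction * workforce   # draining outflow to a sink
    workforce += DT * (hiring - attrition)       # the stock integrates its net flow
    history.append(workforce)

# The stock approaches the equilibrium where inflow equals outflow:
# hiring / attrition_fraction = 120
print(round(history[-1], 1))
```

Everything else in a model (converters, graphical functions, built-ins) ultimately feeds these two primitives: stocks that accumulate, and flows that change them.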


All flows are based on policies expressed as combinations of a small number of generic decision rules mimicking production, compounding, draining, stock adjustment and concomitant actions. Thus, the flow equations are not free-form, but based on a small set of rules (Richmond 2004). Complex models, when not sliced into simpler models, can be organized in a hierarchical structure that has now become a software feature.5
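The small set of generic rules can be sketched as reusable functions; this is a hedged illustration (the function names and the inventory example are mine, not the paper's):

```python
def compounding(stock, growth_fraction):
    """Growth proportional to the stock itself (e.g., births, interest)."""
    return growth_fraction * stock

def draining(stock, time_constant):
    """First-order outflow: a fixed fraction of the stock leaves per period."""
    return stock / time_constant

def stock_adjustment(stock, desired_stock, adjustment_time):
    """Close the gap between desired and actual stock over an adjustment time."""
    return (desired_stock - stock) / adjustment_time

def concomitant(primary_flow, units_per_unit):
    """A flow tied to another flow (e.g., scrap generated per unit produced)."""
    return units_per_unit * primary_flow

# Example policy: production combines stock adjustment (toward a desired
# inventory of 100) with replacement of first-order shipments.
DT, inventory = 0.25, 80.0
for _ in range(200):
    shipments = draining(inventory, 8.0)
    production = max(0.0, stock_adjustment(inventory, 100.0, 4.0)) + shipments
    inventory += DT * (production - shipments)
print(round(inventory, 1))  # → 100.0 (inventory settles at its desired value)
```

Composing flows from this small vocabulary, rather than writing arbitrary equations, is what keeps the decision rules interpretable as policies.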

Building a computer model also requires an iterative procedure, outlined in Figure 8, for its construction and confidence building, in which knowledge of how decisions are taken and of which generic systems are relevant to the problem serves as a reality check.

[Figure 8 is an iterative map linking: Experience; Knowledge of generic decision rules; Knowledge of generic systems; Policy representation; Role system; System representation; Stock and flow representation; Confidence in structure of individual subsystems; Confidence in connectivity of subsystems; Hierarchy and connectivity of subsystems; Computer model]

Figure 8. Iterative process for constructing a computer model

The model mimics a role system underlying the problem being addressed. It must subsume the dynamic hypothesis constructed in the last step, but will have a lot more detail in it. Following Occam's razor, a good model must include all roles relevant to the problem and its policy space but exclude all unnecessary structure. This cannot be done unless the previous steps, construction of the reference mode and its underlying dynamic hypothesis, have been carefully implemented. In the case of larger models, each element of the dynamic hypothesis might be elaborated into a subsystem. As in the case of the reference mode and the dynamic hypothesis, a stock and flow model cannot be built in one go. Each subsystem in the model must be constructed through the iterative process shown on the left side of the map in Figure 8; the connectivity of the subsystems and their contribution to the problem being addressed must follow the iterative process outlined on the right side of the figure. Both the stock and flow structure of the subsystems, and the hierarchy and connectivity of the subsystems in a complex model, must be reconciled with how decisions are made in reality and how various subsystems in the organization interact with one another.

5 Sectors and modules in Stella and ithink can be used to create a hierarchical organization in a model.

Decision rules in the model represent patterns that arise from continuous implementation rather than from switching logic. The information determining decision rules must exist in the variables and the parameters of the system. Reality checks applied throughout model construction must assure that the variables and constants included in the model correspond to what exists in reality. Parameters must be estimated from direct measurements independent of the model (Graham 1980). Model equations must have dimensional consistency, and flow equations must remain functional under extreme conditions, as would also happen in reality. First-order controls on stocks are often necessary to assure realism. Many rounds will be needed to elaborate the structure of the subsystems in the dynamic hypothesis and their connectivity. At the conclusion of this iterative process, the dynamic hypothesis must be recoverable from the structure of the computer model, which is greatly facilitated if the model is organized into a hierarchy of sub-models.
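A first-order control can be sketched as follows (an assumed example; all names are illustrative): the outflow is capped by what the stock can supply within a minimum shipment time, so the flow equation stays functional even under an extreme demand shock and the stock can never be drained negative.

```python
DT = 0.5
shipment_time = 1.0   # minimum time to ship out existing inventory

def shipments(inventory, demand):
    # An uncontrolled rule would ship whatever is demanded; the
    # first-order control caps the outflow at inventory / shipment_time.
    return min(demand, inventory / shipment_time)

inventory = 10.0
for _ in range(40):
    # Extreme-condition test: huge demand, no inflow at all
    inventory += DT * (0.0 - shipments(inventory, demand=25.0))

print(inventory >= 0.0)  # → True: the stock cannot be drained below zero
```

The same pattern guards any conserved stock; without it, a sufficiently large demand would drive the stock negative, a clear failure of the extreme-conditions reality check.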


Model testing for behavioral validity

Once we have arrived at a model structure that we have confidence in, the model is subjected to several simulation tests. First of all, it is important to make sure that the model has an initial equilibrium under internally consistent conditions. This provides a pristine point of reference for further simulations. When the model is displaced from equilibrium by a simple disturbance, its behavior can be easily explained by its structure, as the behavior is entirely generated endogenously. It is also important to assure that the model behavior makes sense under extreme conditions; e.g., if half of the capital in an economy is destroyed, the path of recovery should be plausible. The model must respond to test functions (step and pulse disturbances, noise, ramp, etc.) in a credible way and must create the problem conditions subsumed in the reference mode during these tests.
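The equilibrium-then-disturb test can be sketched as follows (a hedged, assumed example): a first-order capital stock is initialized in equilibrium, a step test input is applied at t = 5, and all subsequent behavior is endogenous.

```python
DT, lifetime = 0.25, 5.0
inflow_base = 20.0
capital = inflow_base * lifetime      # equilibrium initialization: 100.0

trace = []
for step in range(160):               # 40 simulated time units
    t = step * DT
    inflow = inflow_base + (10.0 if t >= 5.0 else 0.0)   # step disturbance
    capital += DT * (inflow - capital / lifetime)        # first-order stock
    trace.append(capital)

# Before the step the stock holds its equilibrium exactly; afterwards it
# moves toward the new equilibrium, 30 * 5 = 150.
print(round(trace[int(4.0 / DT)], 1), round(trace[-1], 1))  # → 100.0 150.0
```

The flat pre-disturbance segment is the point of the test: any drift there would reveal an internally inconsistent initialization before policy experiments begin.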


Figure 9 shows a map of the iterative process, involving computer simulation, that should guide the testing of model behavior. Knowledge of the generic systems outlined in Table 1 and a carefully drawn reference mode help place the model behavior in the context of the problem being addressed. Model simulations must not be random but carefully designed to allow comparison of the model behavior against the knowledge of the generic systems it contains and the reference mode it must replicate. Piecewise testing will help in understanding the behavior of the model subsystems and how they influence the reference mode. Designing and implementing reasonable parameter sensitivity simulations reveals the parameters' influence on the behavior. The simulated patterns must be compared with the patterns in the reference mode rather than with the numbers.
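A designed sensitivity run might look like this sketch (an assumed example; the goal-seeking structure with an information delay and all parameter values are illustrative). The comparison is of the pattern, here the size of the overshoot, not of point predictions:

```python
DT = 0.25

def peak_response(adjustment_time, delay_time, horizon=60.0):
    """Stock managed toward a goal of 100 through a delayed perception."""
    stock, perceived, peak = 50.0, 50.0, 50.0
    for _ in range(int(horizon / DT)):
        stock += DT * (100.0 - perceived) / adjustment_time
        perceived += DT * (stock - perceived) / delay_time  # information delay
        peak = max(peak, stock)
    return peak

for at in (2.0, 4.0, 8.0):
    overshoot = peak_response(at, delay_time=4.0) - 100.0
    print(f"adjustment_time={at}: overshoot={overshoot:.1f}")

# Qualitative pattern across the plausible range: the oscillatory mode
# persists, and faster adjustment produces larger overshoot.
```

The conclusion drawn from such a sweep is a statement about behavior modes ("overshoot grows as the adjustment time shrinks"), not a numerical forecast.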

[Figure 9 is an iterative map linking: Empirical evidence and experience; Knowledge of generic systems; Reference mode; Experiment design; Behavior of subsystems; Behavior of complete model; Model simulations; Comparison/reconciliation; Confidence in behavior of subsystems; Confidence in behavior of whole model]

Figure 9. Iterative process for testing a model for behavioral validity

Behavior reproduction must cover generating the symptoms of the problem, the multiple behavior modes experienced, and the phasing, frequencies and other characteristics of the behavior of the real system. Piecewise simulation will assure conformance to Occam's razor: a behavioral anomaly should appear if an essential part of the structure is taken out. Model behavior should apply to a class of problems; a commodity model should be relevant to a variety of commodities, a supply chain model to a variety of supply chains.


Describing the outcomes of the simulations alone is not enough. The model behavior must be explained in terms of the major feedback loops subsumed in it. Such explanations will also verify the dynamic hypothesis. We must not use any relationships outside of the model to explain its behavior, even though such speculative explanations are quite common in the case of complex computational instruments as well as simplistic statistical models. In general, a causal explanation is more important than an exhaustive knowledge of parameter sensitivity. If we cannot explain behavior in terms of the model structure and interpret each parameter in terms of the behavioral attributes of the role players, our model may not serve as a useful policy instrument even though it might replicate the reference mode.

Sometimes the model might return a mode that has not been recognized in the reference mode. This might initially be viewed as an error, but if it persists, the problem statement should be further researched; it can signal a breakthrough. An example is the discovery of the long wave in Forrester's model of the economy.

Modern computer simulation software often allows estimation of model parameters using sophisticated mathematical and statistical routines and makes them easy to use. Using such estimations provides an illusion of rigor in the model construction process, although they may not add much to model validity. There are several pitfalls in the outcomes of these procedures. First, the parameters are estimated using data in the system slice in Figure 2, which may often be irrelevant to a policy model that represents the system slice in Figure 4. Second, the parameter set providing the best fit may not be unique, as many sets can minimize the deviation between the historical data and the model-generated results. Last but not least, the parameters will depend on the model structure, and if all models are wrong, no confidence can be placed in their accuracy. No wonder the parameters so computed may often be quite different from those estimated from direct measurements made independently of the model. Richardson (1981) demonstrated this by attempting to estimate the parameters of a synthetic model from its own behavior and ended up with a parameter set vastly different from that of the original model.
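The non-uniqueness pitfall can be demonstrated with a toy case (an assumed sketch, not Richardson's actual experiment): in a stock whose inflow is a*S and outflow is b*S, behavior depends only on the net rate (a - b), so wildly different parameter pairs fit the "history" equally well.

```python
DT = 0.5

def simulate(a, b, s0=10.0, steps=40):
    """Stock with inflow a*S and outflow b*S; only (a - b) shapes behavior."""
    s, path = s0, []
    for _ in range(steps):
        s += DT * (a * s - b * s)
        path.append(s)
    return path

history = simulate(a=0.30, b=0.25)   # pretend these are the "true" parameters

def fit_error(a, b):
    return sum((x - y) ** 2 for x, y in zip(simulate(a, b), history))

# A very different pair with the same net rate fits essentially perfectly:
print(fit_error(0.30, 0.25), fit_error(0.90, 0.85))
# Both errors are numerically indistinguishable from zero, yet only direct
# measurement could tell 0.30/0.25 apart from 0.90/0.85.
```

A fit criterion cannot separate these parameter sets, but the policy implications of a turnover rate of 0.85 versus 0.25 are entirely different, which is why parameters should come from direct measurement.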


Policy experimentation for building confidence in system improvement strategies


Creating a policy model and experimenting with it requires both methodological finesse and command over a problem domain, possibly achieved through real experience and not just from theory. System dynamics expertise alone cannot add value; the involvement of a practitioner or a domain expert is essential. Without this experience, we might end up with a model that issues policy recommendations that cannot be implemented. For example, models based on economic theory often call for removing market imperfections and improving rational agency, while these attributes do not exist in the real world (Saeed 2015a). Figure 10 interprets the iterative procedure of Figure 5 for the policy testing task.

[Figure 10 is an iterative map linking: Knowledge of method and problem domain; Awareness of behavioral metrics; Knowledge of intervention channels and policy history; System synthesis; Sensitivity scenarios; Policy scenarios; Model and operational policy space; Comparison/reconciliation; Understanding individual policy responses; Credibility of policy recommendations]

Figure 10. Iterative process for policy testing

Important reality checks require awareness of the behavioral metrics embodied in the model parameters, knowledge of policy history and its performance, and knowledge of the intervention channels available in the organization for which system improvement is sought. Policies leading to desirable and undesirable outcomes can be identified through parameter sensitivity analysis and careful experiments with structural change, realized through connecting unused sources of information to decision points. A policy search, therefore, not only requires specifying a metric tied to the model parameters and testing the model's sensitivity to changes in those parameters, it also calls for identifying information links that might change the system structure. Note that blind sensitivity analysis, however extensive, will not achieve this. Last but not least, any policy search calls for carefully thinking through the policy implementation process, which is driven by the policy levers available in the organization. The first class of policy tests calls for replicating the policy history: a model's ability to replicate the impact of past interventions per available records further builds confidence in it as a policy design tool. Exploratory experimentation can then be conducted to find the policy set that not only improves overall performance in the end, but also minimizes unintended consequences over the course of change, thus making the path to improvement least painful. The robustness of the best policy set we arrive at may be further investigated by running the model with variations in its level of aggregation and parameters as experienced in different manifestations of the problem. This will increase the reliability of the recommended policy set.
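Connecting an unused source of information to a decision point can be sketched as follows (a hedged, assumed example loosely patterned on the well-known supply-line insight; all names and numbers are illustrative). The structural policy adds the stock of units already on order to the ordering decision, and the overshoot disappears:

```python
DT = 0.25

def peak_stock(use_supply_line):
    """Orders adjust a stock toward 100; deliveries arrive after a delay."""
    stock, supply_line, peak = 50.0, 0.0, 50.0
    for _ in range(400):   # 100 simulated time units
        # Structural policy: include (or ignore) the supply line of
        # units already ordered when judging the shortfall.
        perceived = stock + (supply_line if use_supply_line else 0.0)
        orders = max(0.0, (100.0 - perceived) / 2.0)
        deliveries = supply_line / 4.0        # first-order delivery delay of 4
        supply_line += DT * (orders - deliveries)
        stock += DT * deliveries
        peak = max(peak, stock)
    return peak

# Ignoring the supply line produces a large overshoot; using it removes it.
print(round(peak_stock(False), 1), round(peak_stock(True), 1))
```

No parameter was changed between the two runs; only an information link was connected, which is exactly the class of structural policy that a blind parameter sweep can never discover.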


System dynamics was intended to be applied to refining policy paradigms, not forecasting the future. It has, however, predominantly been applied to creating computational instruments that produce unexplained and unverifiable forecasts, which may serve as inputs to the policy process together with simplistic policy paradigms. This combination may well lead to unintended consequences. To reiterate Forrester's position:

1. Dynamic models cannot be derived from historical time series. They must be constructed from the mental and written data bases that reflect the microstructure and the governing policies of the real world.


2. Forecasting of future conditions is not a measure of model suitability because forecasting is nearly impossible, and the goal is to understand how changes in policies affect behavior.

3. Historical time series are not inputs to model building but should be used to compare model behavior with the real economy.

He also saw the modeling process, rather than its end product, as the most important take-away from modeling (Forrester 1985). This process and the model it creates can take simplistic policy paradigms to the next level. This paper suggests that the modeling process is a progression through a series of abstract concepts, each of which requires many iterations using different cognitive and experimental tools. More specifically:

1. The reference mode must be surmised as an appropriate slice of the complex pattern to be investigated. If a historical pattern is used as a point of reference, the multiple modes subsumed in it must be separated either by inspection or by using a filtering process (like averaging) that strains unneeded periodicities. The selected slice of the problem must be extended beyond short-term history. It must also subsume a continuum of experiences from different places and different periods of history and their extrapolated futures. This may require constructing the experience of multiple cases as well as constructing fear and hope scenarios.

2. The dynamic hypothesis must be visualized as an aggregate mental model explaining the intertwined patterns in the reference mode as an aggregate feedback map, not a spaghetti diagram including all variables in the proposed computer model. Visualize relationships between the model sectors or aggregate stocks rather than between specific variables.

3. The computer model must subsume the common structure underlying the multiple long-term patterns of the reference mode and should allow verification of the existence of the dynamic hypothesis in its structure. This can be accomplished by organizing the model as a hierarchy, the top layer of which returns the dynamic hypothesis. The basic structure of the computer sub-models can be selected from an existing inventory of low fidelity models and tailored to the specific situation the model represents.


4. A behaviorally valid model will require experimentation with model parameters to replicate the multiple manifestations and the fear and hope scenarios created in the reference mode.

5. Once we have a model amenable to policy testing, individual policy impacts can be understood by first interpreting each parameter as a behavioral attribute of a policy, then understanding the impact of a change in it.
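The filtering step in point 1 can be sketched with synthetic data (an assumed illustration): a centered moving average whose window matches the unwanted period strains a short cycle out of a growing series, leaving a smooth trend; the residual is the separated cyclical mode.

```python
import math

# Synthetic "history": exponential growth plus a 5-period cycle
series = [100 * math.exp(0.02 * t) + 8 * math.sin(2 * math.pi * t / 5)
          for t in range(60)]

def moving_average(xs, window):
    """Centered moving average; window matched to the unwanted period."""
    half = window // 2
    return [sum(xs[i - half:i + half + 1]) / window
            for i in range(half, len(xs) - half)]

trend = moving_average(series, window=5)              # strains the 5-period cycle
cycle = [obs - tr for obs, tr in zip(series[2:], trend)]  # the separated short mode

raw_monotone = all(b > a for a, b in zip(series, series[1:]))
trend_monotone = all(b > a for a, b in zip(trend, trend[1:]))
print(raw_monotone, trend_monotone)  # → False True
```

The raw series wiggles while the filtered trend rises smoothly; each separated mode can then serve as a reference mode for its own slice of the problem.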

I have made an attempt in this paper to map the processes driving the iterations for implementing each of the above parts. In doing so, I might have circumscribed System Dynamics modeling, which we might need to do.

Note: I acknowledge George Richardson's help with improving Figure 5 and Raid Zaini's help with redrawing Figures 6-10.


References

Abbot EA. 1952. Flatland. London, England: Penguin.

Barlas Y. 1996. Formal Aspects of Model Validity and Validation in System Dynamics. System Dynamics Review. 12(3): 183-210.

Berger PL, Luckman T. 1966. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Garden City, NY: Anchor Books.

Box GEP. 1976. Science and Statistics. Journal of the American Statistical Association. 71: 791-799.

Casti J. 1981. Systemism, System Theory and Social System Modeling. Regional Science and Urban Economics. 11(3): 405-424.

Eberlein RL, Hines JH. 1996. Molecules for modelers. In GP Richardson, JD Sterman (Eds). Proceedings of the 14th International Conference of the System Dynamics Society. Cambridge, MA: System Dynamics Society: 149-152.

Forrester JW. 1959. Advertising: A Problem in Industrial Dynamics. Harvard Business Review. 37(2): 100-110.

Forrester JW. 1961. Industrial Dynamics. Cambridge, MA: MIT Press.

Forrester JW. 1968. Market Growth as Influenced by Capital Investment. Industrial Management Review. 9(2): 83-105.

Forrester JW. 1969. Urban Dynamics. Cambridge, MA: MIT Press.

Forrester JW. 1971. World Dynamics. Cambridge, MA: MIT Press.

Forrester JW. 1980. Information sources for modelling the national economy. Journal of the American Statistical Association. 75(371): 555-566.

Forrester JW. 1985. "The" model versus a modelling "process". System Dynamics Review. 1: 133-134.

Forrester JW. 2010. Foreword. In J Richmond, L Stuntz, K Richmond, J Egner (Eds). Tracing Connections, Voices of Systems Thinkers. Lebanon, NH: isee Systems.

Forrester JW, Senge PM. 1980. Tests for Building Confidence in System Dynamics Models. TIMS Studies in Management Science. 14: 209-228.

Graham AK. 1980. Parameter Estimation in System Dynamics Models. In J Randers (Ed). Elements of System Dynamics Method. Cambridge, MA: MIT Press.

Graham AK. 1988. Generic models as a basis for computer-based case studies. In JB Homer, A Ford (Eds). Proceedings of the 6th International Conference of the System Dynamics Society. La Jolla, CA: System Dynamics Society: 133.

Lane DC, Smart C. 1996. Reinterpreting 'generic structure': Evolution, application and limitations of a concept. System Dynamics Review. 12: 87-120.

Lane DC. 2015. Validity is a matter of confidence, but not just in system dynamics. Systems Research and Behavioral Science. 32: 450-458.

Meadows DL. 1970. Dynamics of Commodity Production Cycles. Cambridge, MA: Productivity Press.

Meadows DL. 1989. Fishbanks, Ltd.: A microcomputer assisted simulation. Durham, NH: Institute for Policy and Social Science Research.

Meadows DM. 1980. The Unavoidable A Priori. In J Randers (Ed). Elements of System Dynamics Method. Cambridge, MA: MIT Press.

Morecroft JD, Larsen ER, Lomi A, Ginsberg A. 1995. The dynamics of resource sharing: A metaphorical model. System Dynamics Review. 11: 289-309.

Morecroft JDW. 2007. Strategic Modeling and Business Dynamics. London: Wiley.

Picardi AC, Siefert WW. 1976. A tragedy of the Commons in the Sahel. Technology Review. 78(6): 1-10.

Repenning NP, Sterman JD. 2002. Capability traps and self-confirming attribution errors in the dynamics of process improvement. Administrative Science Quarterly. 47(2): 265-295.

Richardson GP, Pugh AL. 1981. Introduction to System Dynamics Modeling with Dynamo.

Richardson GP. 1981. Statistical Estimation of Parameters in a Predator-Prey Model: An Exploration Using Synthetic Data. System Dynamics Group, Sloan School of Management, Massachusetts Institute of Technology.

Richardson GP. 2011. Reflections on the foundations of system dynamics. System Dynamics Review. 27(3): 219-243.

Richmond BM. 2004. An Introduction to Systems Thinking. Lebanon, NH: isee Systems.

Romer PM. 1986. Increasing Returns and Long-Run Growth. Journal of Political Economy. 94(5): 1002-1037.

Runge D. 1975. The potential evil in humanitarian food relief programs. MIT System Dynamics Group Memo No. D-2106-1.

Saeed K. 1985. An Attempt to Determine Criteria for Sensible Rates of Use of Material Resources. Technological Forecasting and Social Change. 28(4).

Saeed K. 1992. Slicing a Complex Problem for System Dynamics Modeling. System Dynamics Review. 8(3).

Saeed K. 2003. Articulating developmental problems for policy intervention: A system dynamics modeling approach. Simulation and Gaming. 34(3): 409-436.

Saeed K. 2003. Snake Poaching. Course materials, SD551. Worcester, MA: WPI, SSPS Dept.

Saeed K. 2008. Trend Forecasting for Stability in Supply Chains. Journal of Business Research. 61(11): 1113-1124.

Saeed K. 2009. Can trend forecasting improve stability in supply chains? A response to Forrester's challenge in Appendix L of Industrial Dynamics. System Dynamics Review. 25(1): 63-78.

Saeed K. 2012. Watershed Dynamics display model. Worcester, MA: WPI, SSPS Dept.

Saeed K. 2013. Managing the energy basket in the face of limits: A search for operational means to sustain energy supply and contain its environmental impact. In H Qudratullah (Ed). Energy Policy Modeling in the 21st Century. New York: Springer Verlag.

Saeed K. 2013. System Dynamics: A Disruptive Science? Transcript of a fireside chat with Jay Forrester. Cambridge, MA. Available at:

Saeed K. 2015. Urban dynamics: A systems thinking framework for economic development and planning. ISOCARP Review. International Society of City and Regional Planners, The Hague, Netherlands. 11: 129-132.

Saeed K. 2015a. Jay Forrester's Operational Approach to Economics. System Dynamics Review. 30(4): 233-261.

Saeed K, Harris K, Ruege A, Papa L, Milpuri M. 2015b. Endogenous limits and bottlenecks: Improving anticipation and response in VHA Homeless Programs operations: A strategic thinking exercise based on a simplified model of the People Express case. 33rd International Conference of the System Dynamics Society. Cambridge, MA: System Dynamics Society.

Saeed K. 2016. Systems thinking metaphors for illuminating fundamental policy dilemmas. WPI SSPS Working Paper No. 2016-001. Worcester, MA: WPI.

Saeed K, Pavlov O. 2008. Dynastic cycle: A generic structure describing resource allocation in political economies, markets and firms. Journal of the Operational Research Society. 59(10): 1289-1298.

Senge PM. 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday/Currency.

Sterman JD. 1988. People Express Flight Simulator. Cambridge, MA: MIT.

Sterman JD. 2000. Business Dynamics: Systems Thinking and Modeling for a Complex World. Boston, MA: Irwin McGraw-Hill.
