Military Operations Research

209 downloads 1467 Views 2MB Size Report
Hartley Consulting. Dr. Raymond ..... tion to optimize the decision space. Ultimately, ... search journal has been a leading forum for publication ..... examined some consulting work done with 12 ...... cusing on the analysis method or tool, such as.
zmy002080000C1.qxd

8/11/08

9:18 AM

Page 1

SPECIAL ISSUE: VALUE FOCUSED THINKING SPECIAL ISSUE EDITORS: DR. GREGORY S. PARNELL, FS AND DR. RAYMOND R. HILL

Military Operations Research Volume 13 Number 2 2008 A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY A JOURNAL OF THE MILITARY OPERATIONS RESEARCH SOCIETY

Military Operations Research A publication of the Military Operations Research Society

The Military Operations Research Society is a professional society incorporated under the laws of Virginia. The Society conducts a classified symposium and several other meetings annually. It publishes proceedings, monographs, a quarterly bulletin, PHALANX, and a quarterly journal, Military Operations Research, for professional exchange and peer criticism Editor Dr. Richard F. Deckro Department of Operational Sciences Air Force Institute of Technology AFIT/ENS, Bldg 641 2950 Hobson Way Wright-Patterson AFB, OH 45433-7765 Editors Emeritus Dr. Peter Purdue Naval Postgraduate School Editorial Board Dr. Patrick D. Allen General Dynamics Advanced Information Systems Dr. Paul F. Auclair LexisNexis Dr. Stephen J. Balut Institute for Defense Analyses Dr. Jerome Bracken IDA, RAND & Department of State Dr. Alfred G. Brandstein The MITRE Corporation Dr. Gerald G. Brown Naval Postgraduate School Dr. Marion R. Bryson, FS Consultant Maj Stephen P. Chambal AFRL Dr. Charles A. Correia Virginia Commonwealth University Dr. Paul K. Davis RAND and Pardee RAND Graduate School Dr. Jerry Diaz Homeland Security Institute Prof. Patrick J. Driscoll United States Military Academy

among students, theoreticians, practitioners and users of military operations research. The Society does not make or advocate official policy nor does it attempt to influence the formation of policy. Matters discussed or statements made in the course of MORS symposia or printed in its publications represent the opinions of the authors and not the Society. Publisher Corrina Ross-Witkowski, Communications Manager Military Operations Research Society 1703 N. Beauregard Street, Suite 450 Alexandria, VA 22311

Dr. Gregory S. Parnell, FS United States Military Academy LTC Barry C. Ezell, Ph.D. Army Capabilities Integration Center Dr. Bruce W. Fowler U. S. Army Research and Development Command Dr. Mark A. Gallagher HQ USAF/A9R Dr. Dean S. Hartley, III Hartley Consulting Dr. Raymond R. Hill, Jr. Wright State University Prof. Wayne P. Hughes, Jr., FS Naval Postgraduate School Dr. Jack A. Jackson Science Applications International Corporation Dr. James L. Kays Naval Postgraduate School Dr. Moshe Kress Center for Military Analyses, Israel & Naval Postgraduate School LTC Mike Kwinn US Military Academy Dr. Andrew G. Loerch George Mason University

Dr. James K. Lowe US Air Force Academy Mr. Brian R. McEnany, FS Consultant Dr. Michael L. McGinnis VMASC Dr. James T. Moore Air Force Institute of Technology Prof. W. Charles Mylander III United States Naval Academy Dr. Kevin Ng Department of National Defence, Canada Dr. Daniel A. Nussbaum Naval Postgraduate School Dr. Edward A. Pohl University of Arkansas Dr. Kevin J. Saeger Los Alamos National Laboratory Dr. Robert S. Sheldon, FS Group W Inc. Dr. Joseph A. Tatman Innovative Decisions, Inc. Mr. Eugene P. Visco, FS Visco Consulting Dr. Keith Womer University of Missouri – St. Louis

Military Operations Research, the journal of the Military Operations Research Society (ISSN 0275-5823) is published quarterly by the Military Operations Research Society, 1703 N. Beauregard Street, Suite 450, Alexandria, VA 22311-1745. The domestic subscription price is for one year and for two years; international rates are for one year and for two years. Periodicals Postage Paid at Alexandria, VA. and additional mailing offices. POSTMASTER: Send address changes to Military Operations Research, the journal of the Military Operations Research Society, 1703 N. Beauregard St, Suite 450, Alexandria, VA 22311. Please allow 4 – 6 weeks for address change activation. Military Operations Research is indexed in the following Thomson ISI威 services: Science Citation Index Expanded (SciSearch威); ISI Alerting Services威; CompuMath Citation Index威.

© Copyright 2008, Military Operations Research Society, 1703 N. Beauregard St, Suite 450, Alexandria, VA 22311. ISSN Number 0275-5823

❂ Printed on Recycled Paper

Executive Summaries .............................................................................. 3 Value-Focused Thinking and the Challenges of the Long War Gregory S. Parnell and Raymond R. Hill .................................................. 5

Table of Contents

Applying Value-Focused Thinking Ralph L. Keeney............................................................................................. 7

Volume 13, Number 2

Avoiding Common Pitfalls in Decision Support Frameworks for Department of Defense Analyses

2008

Robin L. Dillon-Merrill, Gregory S. Parnell, Donald L. Buckshaw, William R. Hensley, Jr., and David J. Caswell ........................................... 19

Bringing Value Focused Thinking to Bear on Equipment Procurement Jennifer Austin and Ian M Mitchell ............................................................ 33

The Use of Dynamic Decision Networks to Increase Situational Awareness in a Networked Battle Command Gerald C. Kobylski, Dennis M. Buede, John V. Farr, and Douglas Peters............................................................................................... 47

About Our Authors ..................................................................................... 65

Military Applications Society Institute for Operations Research and the Management Sciences

The Military Applications Society (MAS) is a democratically constituted professional society of open membership dedicated to the free and open pursuit of the science, engineering and art of military operations. It is the first society of the Institute for Operations Research and the Management Sciences (INFORMS). MAS advances research in military operations, fosters higher standards of practice of military operations research,

promotes the exchange of information among developers and users of military operations research; and encourages the study of military operations research. The society sponsors two open meetings annually, publishes a book series "Topics in Operations Research," and, among other activities, jointly publishes PHALANX and MILITARY OPERATIONS RESEARCH with the Military Operations Research Society.

President Dr. Edward A. Pohl Department of Industrial Engineering University of Arkansas Fayetteville, AR 72701 Phone: 479-575-6042 [email protected]

Vice President/President Elect Dr. Patrick J. Driscoll Department of Systems Engineering United States Military Academy Mahan Hall, Room 301 West Point, NY 10996-1779 Phone: 845-938-6587 [email protected]

Secretary/Treasurer LTC Brian K. Sperling Department of Systems Engineering United States Military Academy Mahan Hall, Room 301 West Point, NY 10996-1779 Phone: 845-938-4399 [email protected]

Awards Committee Chair and Former President Dr. Keith N. Womer Hearin Center for Enterprise Science University of Mississippi 662-915 5820 [email protected]

Council Member Dr. Alan W. Johnson Air Force Institute of Technology Dept of Operational Sciences Phone: 937-255-3636 x 4703 [email protected]

Council Member LTC Timothy E. Trainor, USA, Ph.D. Department of Systems Engineering United States Military Academy Phone: 845-938-5534 [email protected]

Dr. Russ R. Vane III General Dynamics Phone: 703-450-0137 [email protected]

Dr. Roger C. Burk Dept. of Systems Engineering United States Military Academy Phone: 845-938-4754 [email protected]

Former President Dr. Richard F. Deckro Air Force Institute of Technology [email protected]

Former President Dr. John P. Ballenger Raytheon, Inc. [email protected]

Former President Dr. Tom R. Gulledge George Mason University [email protected]

Philipp A. Djang US Army Research Labs [email protected]

Dr. Bruce W. Fowler US Army Aviation Missile Command [email protected]

Dr. Steve J. Balut Institute for Defense Analysis [email protected]

APPLYING VALUE-FOCUSED THINKING by Ralph L. Keeney The purpose of making a decision is to achieve one or more objectives. Hence, being very clear about what those objectives are should be the foundation for any thought, analysis, and action to achieve them. Value-focused thinking explicitly involves specifying, articulating, and organizing the objectives and then using them to guide all aspects of the decision process. This paper presents the key concepts of value-focused thinking and describes their relevance to decision making. It discusses how to apply and use these concepts in practice. The complementary roles of value-focused thinking for defining and structuring a decision and more standard decision models for analyzing that structured decision and for evaluating alternatives is outlined.

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS FOR DEPARTMENT OF DEFENSE ANALYSES by Robin L. Dillon-Merrill, Gregory S. Parnell, Donald L. Buckshaw, William R. Hensley, Jr., and David J. Caswell Many defense decision support frameworks involve multiple, conflicting objectives. Multiple objective decision analysis (MODA) and Value-Focused Thinking (VFT) have been used to help decision makers allocate resources more effectively and efficiently. Although there are several excellent textbooks and documented studies, ineffective decision maker/stakeholder interaction, modeling errors, and poor analysis practices are common. We identify several pitfalls that decision analysts should avoid. These pitfalls can result in inadequate problem definition, faulty analysis, confused decision makers, and poor decisions. After identifying the common pitfalls, we propose best practices to avoid these pitfalls. While we focus on decision analysis, many of these pitfalls can occur in any operations research study.

Military Operations Research, V13 N2 2008

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT by Jennifer Austin and Ian M Mitchell The United Kingdom Ministry of Defence Equipment Capability Customer has used Decision Conferencing based on the Multi Criteria Decision Analysis software package Equity since 2001 to prioritise proposed changes to its Equipment Plan, in the light of prevailing assumptions about the security environment and financial conditions. This application reaches across multiple domains and levels of command, to develop a shared understanding of the problem, generate a sense of common purpose amongst the stakeholders and instil a commitment to the way forward. The Decision Conference is a strategic forum reflecting the opinions expressed in it. Central direction provided coherence to reflect the key concerns in enough detail to allow discrimination between the options. Evidence is key to informed and robust results. The process requires capacity in the form of trained facilitators and analysts, with software and hardware. There remain challenges to successful application but it endures as an example of the effective application of Value Focused Thinking.

Executive Summaries

THE USE OF DYNAMIC DECISION NETWORKS TO INCREASE SITUATIONAL AWARENESS IN A NETWORKED BATTLE COMMAND by Gerald C. Kobylski, Dennis M. Buede, John V. Farr, and Douglas Peters Having the right information at the right time is a critical component of combat power. Tomorrow’s information-enabled commanders will have much more information than today’s commanders. However, the information that is critical in one battle is much less important in another. The ability to quickly organize and analyze available data to make time-critical decisions will be imperative for the success of battlefield commanders of Future Combat Systems (FCS) Brigade Combat Teams (BCTs). Current decision support tools are

Page 3

EXECUTIVE SUMMARIES

not mature enough to facilitate a good situational understanding required by an FCS (BCT) commander. Dynamic Decision Networks (DDNs) are a good approximation to dynamic programming, the “gold standard” in decision making optimization. DDNs are an ideal aid for decision makers faced with multiple uncertain variables and conflicting objectives. They are designed to allow for the processing of diverse information of varying quality and fidelity. These traits make them a good alternative for military decision makers. In this paper, we de-

Page 4

scribe DDNs with particular emphasis on the applications and utility of DDNs to the FCS (BCT) commander. This paper shows that DDNs are ideally suited to assist a commander with competing demands and abundant information. The case studies presented in this paper demonstrate how DDNs resolve these competing demands and how it uses all available information to optimize the decision space. Ultimately, the goal is to have each piece of hard-won information available to a commander be used appropriately to make time-critical decisions.

Military Operations Research, V13 N2 2008

INTRODUCTION The Department of Defense is facing critical future challenges which are complex, fraught with uncertainty, and involve significant national consequences. The DoD is under pressure to reduce forces and military resources, yet committing those forces and resources worldwide. Meanwhile, the DoD is modernizing its force structure while struggling to maintain its current system inventory. The DoD is also facing a range of threats requiring sometimes entirely new modes of combat, new methods of training, and new paradigms to developing plans accomplish the assigned missions. As Lescher (2008) discusses, military and defense decision making under uncertainty is a crucial skill in military senior leadership to face these complex challenges. His comments highlight the explicit need for systematic thought when dealing with complex, high-risk, long-term defense and homeland defense challenges. Since World War II, the military has made use of the quantitative methods of operations research as a means to structure their systematic thought processes and provide insight to decision makers facing complex problems. The Military Operations Research journal has been a leading forum for publication of operations research and accounts of successful military applications. However, quantitative modeling on its own is unlikely to meet the challenges of the future. As Keeney (1999) posits, “[t]he fact is that no quantitative model can be developed or used without a qualitative foundation that describes what is important to include in the quantitative model.” Understanding the fundamental concerns within a decision making context, the basic values of stakeholders and decision makers is crucial to modern decision making and a fundamental tenet of decision analysis and value-focused thinking (VFT) (Keeney, 1992) In fact, values pervade decision-making in all fields and are of particular importance in the complex military decision making scenarios involving high levels of uncertainty and significant potential consequences. Most decision-making processes have some method to evaluate the consequences of the decision alternatives being considered. Value-focused thinking is a philosophy that starts by iden-

Military Operations Research, V13 N2 2008

tifying values and then using these values to evaluate and then improve the set of alternatives provided to decision makers. VFT uses the mathematics of Multiple Objective Decision Analysis (MODA) (Keeney & Raiffa, 1976; Kirkwood, 1997) to quantify the values and evaluate the alternatives. This technique is appropriate when we have conflicting objectives, complex alternatives, and major sources of uncertainty. As the DoD has come to increasingly require understandable and traceable analysis, VFT/MODA analysis has gained increasing acceptance. This special issue of Military Operations Research recognizes the increasing potential of VFT/MODA for military decision-making. The papers in this issue balance methodology and application. The first two papers provide methodological guidance. The final two papers recount applications of decision analytical techniques for military planning and operations. The first paper in this issue is “Applying Value-Focused Thinking” by Ralph Keeney. This paper focuses on the “relevance and usefulness of the key concepts used in applying value-focused thinking.” It provides a concise review of the key concepts of VFT and follows this with a discussion of the some challenges associated with using the VFT approach. The second paper is “Avoiding Common Pitfalls in Decision Support Frameworks for Department of Defense Analyses” by Dillon-Merrill, Parnell, Buckshaw, Hensley and Caswell. The authors reflect on their experience in the field and note that VFT/MODA are not failsafe techniques; poorly formed analyses can result without the use of the best practices. To help improve current practice, this paper reviews relevant applications of VFT/ MODA for DoD analyses and describes the commonly occurring pitfalls associated with each step of the VFT/MODA analytical process and recommends best practices to avoid these pitfalls. Austin and Mitchell provide, “Bringing Value-Focused Thinking to Bear on Equipment Procurement.” This third paper in the issue describes the use of VFT/MODA for prioritizing United Kingdom Ministry of Defence equipment procurement. The authors relate how the Equity software system is used as the technical basis for a group decision conferencing system used

ValueFocused Thinking and the Challenges of the Long War Gregory S. Parnell, Ph.D. United States Military Academy [email protected]

Raymond R. Hill, Ph.D. Wright State University [email protected]

Page 5

VALUE-FOCUSED THINKING AND THE CHALLENGES OF THE LONG WAR

over a five year period. They also discuss challenges and pitfalls associated with VFT and their distributed decision making system. The final paper in this issue is “The Use of Dynamic Decision Networks to Increase Situational Awareness in a Networked Battle Command” by Kobylski, Buede, Farr and Peters. A dynamic decision network (DDN) is based on Bayesian Networks (BN) and Influence Diagrams (ID) and is used to solve real-time problems in dynamic environments. Objectives are defined in an objectives hierarchy. DDNs are then used to assess the value of information. The DDN and BN are described and demonstrated on prototypical problems associated with the U.S. Army’s Future Combat System. Information from a commander’s sensor assets and from different levels of command within the FCS can be integrated (fused) by the DDN. The payoff to the FCS warfighter for using DDNs is that large amounts of information will become an asset rather than a nuisance.

Page 6

References Keeney, Ralph L., 1992. Value Focused Thinking: A Path to Creative Decision Making, Harvard University Press, Cambridge, Massachusetts. Keeney, Ralph L., and Howard Raiffa, 1976. Decisions with Multiple Objectives, Wiley, New York. Reprinted by Cambridge University Press, 1993. Keeney, Ralph L. 1999. Foundation for Making Smart Decisions. IIE Solutions, May 1999, pp. 25–30. Kirkwood, Craig W. 1997. Strategic Decision Making: Multi-objective Decision Analysis with Spreadsheets, Duxbury Press: Belmont, CA. Lescher, William K. 2008. Taking Risks. Armed Forces Journal, Vol. 145, No. 8, pp 27–31, 47.

Military Operations Research, V13 N2 2008

ABSTRACT

D

ecisions are made with the purpose of achieving something, which is defined by the values for the given decision. Objectives specify those values in detail. Value-focused thinking is designed to keep your attention throughout the decision process on what you hope to achieve. Its purposes are to help define in a clear and explicit manner the decision problem that should be addressed and to stimulate creative thinking in modeling the decision and throughout the decision process. There are three primary ways that applying value-focused thinking can lead to better decisions. First, value-focused thinking often results in a better set of objectives for evaluating the alternatives, as generating objectives is an explicit focus of value-focused thinking. Second, value-focused thinking facilitates the creation of alternatives, some of which might be better than the readily apparent alternatives that have been previously recognized. Third, value-focused thinking proactively defines decision opportunities that are more attractive to face than the decision problems forced upon us. This paper describes the relevance and usefulness of the key concepts used in applying value-focused-thinking that are often missing in other approaches. It also discusses the challenges and opportunities in applying these concepts.

FOUNDATIONS FOR VALUEFOCUSED THINKING The only purposeful way that anyone can influence the future is by the decisions they make. The rest of the future just happens. This is true for individuals, businesses, government, and military services. Therefore, decisions are important, as we naturally want to influence our future. This desire however is not simply to have influence, but to have influence for the better. Indeed, operations research was originally defined as the science of decision making (Morse and Kimball, 1951) and now operations research is defined as the “science of better” by its professional organization INFORMS (INFORMS, 2007). The concept of better, and what is better in a particular decision context, is critical to decision-making. The operationaliza-

Military Operations Research, V13 N2 2008

tion of this concept in any decision context is the foundation for value-focused thinking. The concept of better is made explicit for a specific decision through a series of steps. First, one needs to specify the values on which the notion of better is grounded. These values are then made more precise by specifying objectives that define them. These objectives should be the basis of thinking about decisions and for appraising alternatives in that decision situation. Of course, it is necessary to have at least two alternatives in order to have a decision, but these alternatives are important only because they are the ways to achieve the objectives. Indeed, the objectives can be useful in creating alternatives and for even creating some decision situations that we could beneficially face. Decision situations that we create because we recognize the potential to do something better are referred to as decision opportunities. There are also decision situations, referred to as decision problems, which typically occur due to circumstances beyond our control. In these situations, there often are at least two alternatives that become apparent that tend to call our attention to a pending decision. Thus, for decision problems, at least some alternatives come prior to specifying the objectives to appraise the alternatives. However, even for decision problems, it is useful to specify a complete set of objectives to be achieved in the decision context, and these objectives may suggest a broader decision context, i.e., a decision opportunity, as well as additional alternatives to consider. The set of objectives that define our values for any given decision situation and the set of alternatives to achieve them provide the decision frame for a particular decision. At this stage, a reasoned choice can sometimes be made using thinking only. In other cases, it is useful to have an analysis to provide additional information and insights to make an informed, and hopefully better, choice. For these latter situations, an objective function should be constructed that incorporates all of the relevant fundamental objectives to evaluate the alternatives. Consistent with value-focused thinking, the objective function should be a multiattribute utility function if uncertainties are significant, as they often are, or a multiattribute value function when the uncertainties are not significant. Many books

Applying ValueFocused Thinking Ralph L. Keeney Fuqua School of Business Duke University [email protected]

APPLICATION AREAS: Measures of Effectiveness; Analysis of Alternatives; and Decision Analysis OR METHODOLOGY: Decision Analysis

Page 7

APPLYING VALUE-FOCUSED THINKING

and articles address the structuring of objective functions, and this task is much better defined than the previous creative tasks of generating objectives and alternatives from scratch. Still, many approaches to define objective functions do not begin with a solid foundation of the fundamental objectives and do not explicitly address the value tradeoffs among objectives and the risk tolerance that are inherent parts of any complex decision. It is more common to think through decision situations using alternative-focused thinking, which is more reactive than proactive. The thinking begins once at least two alternatives are identified or circumstances essentially force thinking about a decision problem. Subsequently, a set of objectives may be identified to appraise the given alternatives and then the best of the alternatives is chosen. There are essentially three ways that value-focused thinking can lead to better decisions than alternativefocused thinking. First, value-focused thinking can often provide a better set of objectives for evaluating the alternatives, as generating objectives is an explicit focus of value-focused thinking. Second, value-focused thinking promotes and facilitates the creation of alternatives, some of which might be better than the readily apparent alternatives that have been previously recognized. Third, value-focused thinking helps define decision opportunities that are more attractive to face than the decision problems forced upon us. Usually these decision opportunities are more broadly framed than decision problems that first become apparent. These three mechanisms are achieved by applying the key concepts of value-focused thinking. This paper will focus on the relevance and usefulness of the key concepts used in applying value-focused thinking that are often missing in other approaches. Thus it focuses on the qualitative foundations that are critical to both (1) clear thinking about a decision and reasoned appraisal of the alternatives and (2) sound analysis if an analysis is desirable. Key Concepts of Value-Focused Thinking outlines several of these key concepts and describes their significance. Next Challenges and Opportunities in Applications discusses the challenges and opportunities in applying these concepts. Specifically, it focuses on some of the important technical, political,

Page 8

organizational, and emotional challenges to applying value-focused thinking. The final section offers some concluding thoughts.

KEY CONCEPTS OF VALUEFOCUSED THINKING There are several concepts of value-focused thinking that are important to define. In a sense, one can think of this section as a glossary of some of these concepts with brief examples to illustrate their meaning. Notions related to these concepts are used in many approaches to facilitate decision-making. What is distinctive about value-focused thinking is the specificity with which each of these concepts is defined and the time and effort explicitly focused on each of these concepts in applying them to any specific decision.

Values The values for any decision situation are essentially a list of all that we care about achieving in that decision context. No constraints are placed on how that information is described or organized. The main concern is to get that which is valued out of the heads of those individuals who understand the decision and why it is worthy of thought. Once listed on paper or electronically, then one can constructively organize all those thoughts into a coherent representation of what is valued. Thus, redundancy is not a shortcoming in the initial specifications of values but omissions are a significant shortcoming. Consider the decision situation where the Department of Homeland Security is interested in reducing the risk of terrorism to U.S. ports. Some values may be stated as follows: make the ports safe, worker risks, public security, minimize national disruption, port closures, the time that the port might be closed, economic consequences, and political effects. The list should not require consistency in the way terms are stated or in their specificity. All of these relate to things we care about and so are legitimate values for our decision.

Military Operations Research, V13 N2 2008

APPLYING VALUE-FOCUSED THINKING

Objectives Objectives articulate the stated values more specifically and in a coherent manner focused on decision making. Typically an objective is best stated using a verb and an object. For the concern about potential terrorist activities in ports, the value “worker risks” should be stated with three objectives, namely minimize worker fatalities, minimize worker injuries, and minimize worker illnesses. It may also be useful to separately distinguish the types of injuries and illnesses that we are concerned about as their significance may vary greatly. The value “the time that the port might be closed” can be articulated with the objective minimize the length of port closure. For the value “political effects”, we would want to specifically ask individual’s knowledgable about the decision what political effects were of concern. This need to clarify is a natural process that follows from the initial listing of values. Once values are clarified, we should be able to write meaningful objectives to capture the relevant concerns in objectives for appraising alternatives.

Fundamental, Means, and Process Objectives There are three types of objectives that are important to distinguish in value-focused thinking. Fundamental objectives describe the fundamental reasons that we are interested in a decision problem. The set of fundamental objectives should collectively describe all of the potential consequences we care about for evaluating a set of alternatives. A good set of fundamental objectives should satisfy several properties such as being complete and avoiding overlap to preclude double counting (Keeney, 2007). Means objectives are objectives whose achievement influences the degree to which the fundamental objectives are achieved. As an example, a means objective regarding port security would be to minimize any radiation exposure to port workers due to a dirty bomb attack (Rosoff and von Winterfeldt, 2007). The reason why the radiation exposure is important is that greater radiation exposure could lead to more

Military Operations Research, V13 N2 2008

illnesses and fatalities to workers, which are described by two fundamental objectives in the decision. Process objectives concern how a decision is made rather than how good the alternatives may be. Hence they pertain to a different decision problem, as how a decision is made is different from what alternative should be chosen. As examples, one process objective of the Department of Homeland Security regarding potential risks at ports would be to use all the relevant information to evaluate alternatives and another process objective would be to involve the stakeholders such as port workers and companies shipping into the port in the process of decision making about port security.

Attributes An attribute is a scale used to indicate the degree to which an associated objective is met. Sometimes there are obvious scales. For instance, one objective of evaluating port security alternatives would be to minimize the total cost of security. One attribute to measure this is the annual dollar cost of that security and another attribute is the net present value of the cost of security over a specific time period. One would expect that once clearly inferior alternatives were eliminated, the more money spent on security, the safer the port would be. Hence, as mentioned below, value tradeoffs will need to be made about the relative significance of specific achievements on different objectives. There are basically three types of attributes that are used to evaluate the achievement of objectives (Keeney and Gregory, 2005). The examples above are natural attributes which directly indicate the degree of achievement on a scale that is in common use. Typically these attributes can be counted or measured as with dollars or days. A related type of attribute is a proxy attribute, which differs only in that it does not directly indicate the achievement of an objective but indirectly is related to that achievement. For instance, if the main mechanism of illness from a terrorist attack on ports was deemed to be through potential dirty bombs, then one might expect there would be a correlation between the amount of radiation

Page 9

APPLYING VALUE-FOCUSED THINKING

emitted by a dirty bomb and the health effects that would accrue to workers in the port area because of it. An attribute defined as the amount of radiation released would be a proxy attribute, as it does not directly measure the number of health affects. The third type of attribute is a constructed scale. These must be used when there is no natural or proxy scale for an objective. An example might be for the objective of minimize political effects. Here six distinct levels of political affects might be described by six different paragraphs. One could then rank these paragraphs from best to worst and this would be a scale. For a constructed scale, it is important that the descriptions for each of the constructed levels be meaningful to all the decision makers concerned about a decision and that the range of descriptions cover the possible ranges of effects in the problem.

Objective Functions and Value Tradeoffs If there are ten fundamental objectives for a decision, then there should be ten attributes, namely one corresponding to each objective. Hence, any possible consequence of the alternatives can be described in terms of a tendimensional vector that describes the levels on each of the ten objectives. An objective function is a mathematical function that takes each possible ten-dimensional vector and aggregates it into an overall evaluation. There are numerous research results on this particular problem (Fishburn, 1965; Krantz et al., 1971; Keeney and Raiffa, 1976; von Winterfeldt and Edwards 1986). Basically, the aggregation of ten descriptors into one number is to insure a consistent evaluation of alternatives in a meaningful manner. In order to do this, the construction of the objective function must be done consistent with reasonable values for the decision being faced. Reasonable values should be transitive, meaning that if one consequence is preferred to a second and if that second consequence is preferred to a third, then the first consequence should be preferred to the third. Transitivity and other reasonable properties (von Neumann and Morgenstern, 1947) are incorporated in

Page 10

multiattribute utility functions (Keeney and Raiffa, 1976) and multiattribute value functions (Dyer and Sarin, 1979). To construct any objective function with more than one objective requires that value tradeoffs among objectives be explicitly addressed. A value tradeoff specifies how much achievement on one objective is exactly equivalent in value to a specific achievement on a second objective. This is done for all pairs of objectives so one can relate any achievement on one objective in terms of its value to any achievement on another objective.

Alternatives Alternatives have the same notion in valuefocused thinking as in many decision models. An alternative is any one of a mutually exclusive and collectively exhaustive set of choices that can be selected to solve or resolve a decision problem. An essential property of an alternative is that its selection must be totally under the control of the decision making entity. For example, an alternative could not be defined as “have no terrorist attacks in U.S. ports”. Whether or not such attacks occur is not totally under the control of decision makers in our country. We could have objectives to minimize the likelihood of an attack, to minimize the number of attacks, and/or to minimize the seriousness of attacks. The alternatives in the situation would necessarily involve elements that are totally under our control that can influence how well we achieve our objectives.

Decision Frame A decision frame is basically a concise representation of the decision problem being addressed. Initially, the decision frame is typically a set of words such as ensure that U.S ports are safe from terrorist risks. Subsequently, the set of fundamental objectives and the set of alternatives constructed for the decision serve as a more precise decision frame.

Military Operations Research, V13 N2 2008

APPLYING VALUE-FOCUSED THINKING

CHALLENGES AND OPPORTUNITIES IN APPLICATIONS Value-focused thinking includes specific tasks to explicitly address key elements in structuring decision situations. The intent is to provide a better basis for understanding and thinking about the decision and to provide a sound foundation for any subsequent analysis. Most of these tasks require little mathematics if any, but they require disciplined and hard thinking. These tasks don’t have right or wrong answers, and the quality and clarity of thinking typically correlates with the time and effort put into the tasks. This section discusses some of these key tasks and outlines the challenges to doing them well.

Generating Objectives Generating a comprehensive set of fundamental objectives for a decision problem is a difficult task. One cannot simply ask the decision maker or decision makers to write down their objectives and then proceed with the list provided. In applications, I have always tried to stimulate decision makers to think more broadly about what objectives they have for a given decision situation (Keeney, 1992). More recently with colleagues at Duke, we have done experiments to indicate how well individuals can generate their objectives for decisions that matter to them. In Bond et al. (2008), we examined two different decision situations facing MBA students. The first two experiments separately involved regular MBA students and executive MBA students selecting a school for their education. These were retrospective studies as the participants had already chosen an MBA program, but the decision was relevant in that they were knowledgeable about the decision. The third experiment involved MBA students selecting internships for the summers between their first and second years. These students were facing this problem when we interacted with them. Each of the three experiments had the following basic structure. The participants were first asked to write down all the objectives that

Military Operations Research, V13 N2 2008

mattered to them in the stated decision situation. They were given a week for this task. They were then given our comprehensive list of objectives for that problem and asked to check all those they felt were relevant. The terminology used in writing their own objectives was often different from the terminology used in our comprehensive list of objectives. For instance, they could have written what we defined earlier as values and our list contained objectives using verbs and nouns. Hence we asked the students to map their list of objectives onto the set of objectives on the comprehensive list that they personally had checked. This gave us a way to identify how many of their checked objectives from the comprehensive list were included in their original list. There are two notable results from this research. First, the participants on average originally listed less than half of their self-proclaimed objectives from the comprehensive list in all three studies. Because participants could have listed all their important objectives and simply omitted less significant objectives, the participants were subsequently asked to prioritize their checked objectives on the comprehensive list. The second key result was that the objectives that were missed were about as important as those that were identified. Thus, the sets of objectives had serious shortcomings in both quantity and quality. In conjunction with these experiments, we examined some consulting work done with 12 executives at Seagate Software Corporation in the mid 1990’s (Keeney, 1999). In this situation, each of the executives was individually interviewed for an hour about all of the objectives of the newly-founded organization. Subsequently, I organized each of their objectives and integrated their viewpoints together to include all the objectives the different individuals had mentioned. All 12 felt that the final result represented what aspirations they had for the organization. Yet upon examination, each of the 12 identified on average only 36 percent of the key objectives subsequently ratified for the organization. There are many challenges that can cause an initial listing of fundamental objectives to be inadequate. The reasons why individuals have a difficult time listing all of their important

Page 11

APPLYING VALUE-FOCUSED THINKING

objectives is a topic my Duke colleagues and I hope to investigate. In addition, there may be political and organizational concerns that would suggest omitting certain objectives. For instance, if one is considering different military campaigns, if including the objective of minimizing the loss of personnel due to friendly fire is included, this clearly recognizes that friendly fire casualties may occur. Although realistic, it may not be politically astute to include such objectives if elected officials and the public have immediate accessibility to the work.

Distinguishing Fundamental, Means and Process Objectives It is often difficult to distinguish between fundamental, means, and process objectives. The easier of the situations is to identify the process objectives, as they concern how one should make a specific decision rather than what alternative should be chosen for that decision. Recognizing the distinction between these two different decision contexts is important. One challenge that sometimes occurs is a perceived need by decision makers to make the decision in a way such that their choice in the fundamental decision is understood and accepted by those who may need to approve it. Thus, in the minds of the decision makers, what to do and how to do it decisions may be intrinsically intertwined. Complexities can often occur in identifying what objectives are means and what objectives are fundamental. There is sometimes a tendency in many organizations to use means objectives in place of fundamental objectives because, among other reasons, they are easier to use. In considering different alternatives for pursuing a conflict, it is easy to check whether certain preparations were made, if they were on time, and if they done correctly. It is much harder to evaluate whether all of the possible consequences of the conflict once begun were anticipated. The quality of preparation is naturally a means to improve the consequences of the conflict. However, if the consequence of the conflict happens to turn out poorly, then others can readily raise many questions about preparations. Thus, it is important to document prep-

Page 12

aration and to relate the level of preparation to the anticipated level of success in the conflict, but only the fundamental objectives concerning the possible conflict consequences should be used to evaluate alternatives. It remains the case that one wants to use the fundamental objectives for the evaluation of the alternatives and not the means, as the means are only relevant for their impact on the ends.

Specifying Attributes A technical challenge to the specification of many attributes is when there is no natural attribute. It is easy to imagine that one objective of a proposed new weapon is that the weapon be as effective as possible. However, defining exactly what effective means by identifying measurable attributes is still a difficult task. Part of effectiveness could relate to its power, part to its accuracy, part to its ultimate consequences, and part to any negative implications to our forces by its use. In such a case, it would likely be fruitful to specify each of the parts of effectiveness with a separate objective and then select an attribute for each. Then, still there may be organizational, political, or psychological difficulties of using a particular natural attribute such as the number of enemy fighters killed by a certain weapon, yet this may be one purpose of the weapon. Information, and therefore information systems, is critically important to military operations (Buckshaw et al., 2005). Thus, protecting these information systems and maintaining their integrity is an important issue to our military services. Hamill et al. (2005) outline an approach to evaluate information assurance strategies, where one of several objectives that they discuss concerns operational capabilities. Components of this that they specify are functionality, interoperability, efficiency, and convenience to users. It is useful to be specific about the meanings of each of those terms by selecting or constructing an attribute to measure them. Indeed, Hamill et al. (2005) go into detail to define each of these terms for the problem investigated and suggested measures that may be appropriate. The key point to stress is that it is a legitimate task to specifically focus

Military Operations Research, V13 N2 2008

APPLYING VALUE-FOCUSED THINKING

on how one should measure different objectives. It requires work and effort. To assume, and more significantly act as if, one can simply choose a measure without much thinking and then go on is completely inappropriate.

Quantifying Values If one has a complete set of fundamental objectives, then it is reasonable that the objective function be additive over the set of fundamental objectives (Keeney, 1992). In such cases, the appropriate additive multiattribute utility function is

u共x1,x2,. . .,xn兲 ⫽ k1u1共x1兲 ⫹ k2u2共x2兲 ⫹ . . . ⫹ knun共xn兲, where xi is the description of the consequence with respect to the ith objective, ui is a utility function over the ith objective, and ki is the scaling factor for the ith objective. Indeed, one should strive to have a complete set of the fundamental objectives for any decision problem so that the additive objective function is appropriate. With an additive utility function, then one needs to assess the component utility functions and the scaling factors ki, i ⫽ 1,. . .,n. The most common significant shortcomings in developing any objective function are the inadequacy of the set of objectives, the inappropriateness of the set of attributes, and an illogical set of scaling factors. The challenges in doing the former two tasks well are discussed above. The inadequacies in developing the set of scaling factors ki are well documented (see for example Keeney, 2002). The major shortcoming is that the inherent value tradeoffs are often not logically addressed when assessing the scaling factors. Rather, questions that do not pertain to the substantive value tradeoffs are asked about the “relative importance” of the various objectives. These questions are typically something like “Is objective A or objective B more important?”, which does not have a clear meaning. To ask unambiguous questions about value tradeoffs, one needs to include the relative amount of achievement on the various objectives. An example may be helpful.

Military Operations Research, V13 N2 2008

Suppose there are a number of different missile systems that may be developed for a particular purpose. These systems can all be ready in the same amount of time and are very similar except in terms of their costs and their accuracy. Further assume that an appropriate attribute for costs is millions of dollars, and an appropriate attribute for accuracy is the distance in feet that the missile misses its designated target. In this context, the scaling factors in an additive two-attribute utility function would simply be kc and ka on the objectives of costs and accuracy, respectively. One could ask knowledgeable individuals which is more important, the cost of the weapon system or its accuracy. What one should ask is whether a difference in cost from say $500 million dollars to $900 million dollars is more significant, equally significant, or less significant than, for instance, a change in accuracy from 20 feet to 100 feet. To make this point, suppose that we had asked the question, “What is more important, the cost of the weapon system or the cost of the weapon system”? Anyone would recognize that this question is idiotic. But if the question were changed to ask what is more important, “a change from $500 million dollars to $700 million dollars” or “a change from $300 million dollars to $900 million dollars”, then this question would be very meaningful. It is the amounts of the achievement on the objectives that are necessary to make the question specific. These questions focus logically and unambiguously on the value tradeoffs necessary to construct an objection function. Even when asking the right type of questions, there are naturally challenges to clearly articulating value tradeoffs, several of which are discussed in Keeney (2002). One significant error in quantifying values is selecting a simplistic objective function, often for the ease of analysis, that incorporates ridiculous value judgments. One example concerned the storage of antidotes for a particular disease that terrorists may attempt to spread in an urban area of millions of people. The analysis used only one objective, which concerned the amount of time from the initial spread of the disease until individuals had the antidote. Furthermore, they specified that the objective function should be the time until the last person

Page 13

APPLYING VALUE-FOCUSED THINKING

could receive the antidote. With very little thought, it is easy for most people to recognize that if an area has 4 million people, all of them receiving the antidote within two days except for one individual who receives it in 15 days is probably greatly preferred to an alternative where everybody receives the antidote in 10 days. Yet with the stated objective function, the latter is better, and seemingly much better (i.e., 10 days compared to 15 days). In these situations, the results of an analysis are essentially worthless at best, and seriously misleading at worst. Furthermore, the results of such analyses are recognized by potential users of the analysis as ridiculous. However, some of these users don’t understand that the ridiculousness is due to shoddy work, and conclude that models are not generally useful. The point is that a good objective function requires, among other things, a good set of fundamental objectives, and subsequently clear thinking and a dedicated effort to construct a useful objection function.

Creating Alternatives The frequent and significant shortcoming of a set of alternatives is that they often are too narrowly specified. Sometimes this is referred to as thinking inside the box, and indeed sometimes it is thinking inside a very small box. One should think more broadly (i.e., outside the box) and generate a broader set of alternatives. If none of those alternatives turn out to be better than those that were created by thinking inside the box, then this should be apparent when appraisal, perhaps assisted with a formal evaluation, of the alternatives is conducted. There are two obvious facts concerning the set of alternatives considered. First, one can never select an alternative that is not considered and second, the alternative selected can be no better than the best in the set considered. Thus, if by creating alternatives, you are able to create one that is better than any originally in the set, this could have a much more significant impact than choosing the best of the original set. Value-focused thinking includes a fruitful way to stimulate the creation of alternatives. For each of the means and fundamental objec-

Page 14

tives, asking what type of alternative might perform exceptionally well on that single objective can stimulate thought that will lead to creative alternatives. For instance, consider a fire department examining how to best improve its operations. Two of its fundamental objectives are to minimize loss of life and minimize property losses due to fires. A means objective to these is to minimize the response time of fire equipment to fires. A means to better achieve this objective is to dispatch equipment as quickly as possible after the fire occurs. Finally, a means to that objective is to get the information quickly to the fire departments so they can respond. Looking at this last means objective naturally suggests that better information systems that identify and/or communicate the occurrence of fires sooner to the fire department units that should respond would be potential alternatives for the problem. There are of course challenges to creating alternatives. As in many situations, creating alternatives is hard, because it requires creative thinking. Also there may be turf battles within an organization and some individuals may want to limit the alternatives considered, perhaps because they think that some of these may turn out to be better than their preferred alternative. And they would not be willing to change their mind about their preferred alternative even if the evidence suggested that they should. A challenge that limits the creation of alternatives is the use of constraints. Often in the statement of the problem, constraints are included such as the weapons system must be delivered within a one-year period. This would naturally rule out a tremendously effective and relatively inexpensive weapons system that would take 13 months to produce. If the constraint were not included, this alternative could be evaluated along with others that did take less than 12 months. If it turned out when addressing the value tradeoffs of the problem that saving a lot of money and having the more effective system was worth waiting the extra month, then the alternative that would have been excluded by the constraint may be chosen. On the other hand, if the value tradeoff suggested that it was not worth waiting that one month for the better weapons system, then a

Military Operations Research, V13 N2 2008

APPLYING VALUE-FOCUSED THINKING

sound logic for that would be available and the alternative chosen would have been the same as with the constraint. In other words, allowing objectives and value tradeoffs to take care of the evaluation, rather than the use of constraints when one is not clear of what implications they have on the quality of the choice, is a useful way to avoid the shortcomings of constraints and it naturally fits with value-focused thinking.

Framing a Decision Often, a decision is framed by others in terms of the alternatives that might be chosen. If this is the starting point, one should immediately proceed to specify the values and then the objectives that seem appropriate for the decision. By following means-ends relationships to understand why we care about this decision in a fundamental sense, we can often broaden the problem being addressed. And this broadening allows us to include a greater set of alternatives. There has been much interest in requiring all major passenger aircraft in the United States to install a system that would cause handheld heat seeking missiles fired at airplanes by terrorists to miss the target airplane. The costs are estimated to be in the range of tens of billion dollars for such a system. Such a system does have appeal as it gives the public the feeling that airplanes are safe. But when one more broadly thinks about the problem, there are other weapons that could bring down airplanes such as a bomb planted on board or non-heat seeking projectiles fired from land at an aircraft. The issue is that the billions of dollars may not make airplanes much safer, because terrorists could simply change the mechanism they would use to attempt to cause commercial airliners to crash. In any case, it seemed more appropriate to frame the problem as one of minimizing the risks of commercial airplanes being shot down by terrorists and minimizing the direct and indirect consequences of such events should they occur (von Winterfeldt and O’Sullivan, 2006). In value-focused thinking, it is sometimes useful to simply think of something that should

Military Operations Research, V13 N2 2008

be accomplished and then specify the dimensions of accomplishments with the set of fundamental objectives. This is what I earlier referred to as a decision opportunity, as this process frames a decision by specifying a set of objectives and perhaps even an objective function that describes what we would like to achieve. This may occur before there are any alternatives. Subsequently the alternatives can be created as discussed above. The biggest challenge to creating appropriate decision frames is simply being reactive in all the decisions that are being faced rather than also stepping back and being proactive. Proactive begins with asking first what we really would like if we could have all that we want. This is the beginning of the thought process that stimulates decision opportunities and proactively allows us to face them. It should be recognized that when some proactive decisions are addressed and addressed well, there are often fewer reactive decisions that one is forced to consider. This simple personal example makes the point well. For most of us, no one is forcing us to become or remain physically fit and healthy. However, we can declare this as a decision opportunity. Our set of alternatives would include possibilities that affect any of our eating and exercise habits among others. If we choose such alternatives and pursue them, we may be more fit than otherwise. As a consequence, 5 or 10 years from now, we might not all of a sudden be forced into the reactive mode of deciding where to get triple bypass surgery or other very undesirable decision problems. In a sense, proactive decision opportunities can be thought as including more prevention, whereas reactive decision problems can be thought of as cures.

CONCLUSIONS The effort to analyzing decision situations can be broken into two parts: what is the problem and what is the solution. Most of the decision modeling tools and techniques used to analyze those decisions, whether they are decision analysis or decision support systems or mathematical programming, focus on analyzing the solution once the problem is clearly

Page 15

APPLYING VALUE-FOCUSED THINKING

stated. Indeed, many of the books and papers on decision modeling could start out with a sentence like “Given a decision problem of the following type with these objectives and the following set of alternatives, then . . ..” There is certainly worthwhile analysis to be done after the “then”. However, for many decisions the more creative and important part of the analysis concerns what is the problem. In this paper, what is the problem means defining the decision frame, specifying the objectives relevant to it, and creating a broad range of potentially effective alternatives. Value-focused thinking, which mainly concerns “what is the decision”, should not be compared to other approaches that address different circumstances. The main competitor with value-focused thinking, which is a systematic and traceable process, is an implicit process that involves simply stating the decision frame and listing the objectives and alternatives. So value-focused thinking addresses the large challenge of trying to make sense out of a very messy situation. Its purpose is to organize that situation in a way that subsequent thinking and analysis can be insightful and useful. In that it addresses several hard issues in any decision situation, there are certainly challenges to how it is done and to how it is done well. These are the challenges outlined in Challenges and Opportunities In Applications. In general, any shortcomings due to those challenges can be reduced by recognizing that the specific aspect of the overall decision is worthy of serious thought, time, and effort. It is reasonable to have a concentrated effort on specifying objectives. It is reasonable to spend time creating alternatives. And it is reasonable to carefully think about your decision situation, and indeed to consider the relative usefulness of different decision frames for a given decision situation. With this time and effort, one needs to rely heavily on common sense and creativity. Finally, it is worth restating that the basis for good value-focused thinking is a very sound set of objectives. If an objective is relevant to a decision, then it doesn’t make sense to neglect it by concluding that we have no data on it or that we could not get data on it. If it is a relevant objective, it should be included. The set of objectives are a basis for better addressing

Page 16

all of the challenges subsequently mentioned in the challenges and applications section. Specifically, they are useful for quantifying values, creating alternatives, and framing the decision situation. In my mind, that is why this way of approaching decisions is called value-focused thinking.

REFERENCES Bond, S.D., K.A. Carlson, and R.L. Keeney, 2008. Generating Objectives: Can Decision Makers Articulate What They Want?, Management Science, Vol. 54, No. 1, pp.56 – 70.. Buckshaw, D.L., G,S, Parnell, W.L. Unkenholz, D.L. Parks, J.M. Wallner, and O.S. Saydjari, 2005. Mission Oriented Risk and Design Analysis of Critical Information Systems, Military Operations Research, Vol. 10, No. 2, pp. 19 –38. Dyer, J. S,. and Sarin, R. 1979. Measurable Multiattribute Value Functions. Operations Research, Vol. 27, No. 2, pp. 810 – 822. Fishburn, P.C., 1965. Independence in Utility Theory with Whole Product Sets. Operations Research, Vol. 13, No. 2, pp. 238 –257. Hamill, J.T., R.F. Deckro, and J.M. Kloeber, 2005, Evaluating Information Assurance Strategies, Decision Support Systems, Vol. 39, No. 3, pp. 463– 484. INFORMS, 2007. www.informs.org Keeney, R.L., 1992. Value-Focused Thinking: A Path to Creative Decision Making, Harvard University Press, Cambridge, Massachusetts. Keeney, R.L., 1999. Developing a Foundation For Strategy at Seagate Software, Interfaces, Vol. 29, No. 6, pp. 4 –15. Keeney, R.L., 2002. Common Mistakes in Making Value Tradeoffs, Operations Research, Vol. 50, No. 6, pp. 935–945. Keeney, R.L., 2007. Developing Objectives and Attributes, Advances in Decision Analysis, W. Edwards, R.F. Miles, Jr., and D. von Winterfeldt, eds., Cambridge University Press, New York.

Military Operations Research, V13 N2 2008

APPLYING VALUE-FOCUSED THINKING

Keeney, R.L., and R.S. Gregory, 2005. Selecting Attributes to Measure the Achievement of Objectives, Operations Research, Vol. 53, No. 1, pp. 1–11. Keeney, R.L., and H. Raiffa, 1976. Decisions with Multiple Objectives, Wiley, New York. Reprinted by Cambridge University Press, 1993. Krantz, D.H., R.D. Luce, P. Suppes, and A. Tversky, 1971. Foundations of Measurement, Vol. 1, Academic Press, New York. Morse, P.M. and Kimball, G.E., 1951. Methods of Operations Research, MIT Press, Cambridge, Mass. and John Wiley, New York. Parnell, G. S., R.L. Dillon-Merrill, and T.A. Bresnick, 2005. Integrating Risk Management with Homeland Security and Antiterrorism Resource Allocation DecisionMaking, pp. 431– 461 in The McGraw-Hill Handbook of Homeland Security, David Kamien, ed. New York, New York.

Military Operations Research, V13 N2 2008

Pruitt, K.A., R.F. Deckro, and S.P. Chambal, 2004. Modeling Homeland Security. Journal of Defense Modeling and Simulation, Vol. 1, No. 4, pp. 187–200. Rosoff, H. and D. von Winterfeldt, 2007. A Risk and Economic Analysis of Dirty Bomb Attacks on the Ports of Los Angeles and Long Beach, Risk Analysis, Vol. 27, No. 3, pp.533–546. von Neumann, J., and O. Morgenstern, 1947. Theory of Games and Economic Behavior, 2nd edition, Princeton University Press, Princeton, New Jersey. von Winterfeldt, D., and W. Edwards, 1986. Decision Analysis and Behavioral Research, Cambridge University Press, Cambridge, England. von Winterfeldt, D., and T.M. O’Sullivan, 2006. Should We Protect Commercial Airplanes Against Surface-to-Air-Missile Attacks by Terrorists?, Decision Analysis, Vol. 3, No. 2, pp. 63–75.

Page 17

RIST PRIZE 2009 CALL FOR ENTRIES David Rist Prize: The Rist Prize recognizes the practical benefit sound operations research can have on “real life” decision making and seeks the best implemented military operations research study from those submitted in response to this 2009 Rist Prize Call for Entries. This call solicits abstracts with letters of endorsement for implemented recommendations from studies or other operations researchbased efforts, e.g., analyses, methodology improvements, that influenced major decisions or practices. Entries for individuals or teams submitted in response to this call will be eligible for consideration for the Rist Prize. There are two cash prizes that may be awarded: $3,000 for first place (i.e. Rist Prize winner) and $1,000 for honorable mention. To be considered: ™ ™ ™

An unclassified abstract and letter of endorsement for the implemented recommendations from the study must be submitted to the MORS office and postmarked no later than Thursday, 15 January 2009. The abstract cannot be longer than 3 pages (8.5” by 11”, single sided, 10 point type minimum, .75” margins all around minimum). The letter of endorsement must be signed by an official at a Flag/General officer, Government SES, or company VP level (or equivalent) from the organization using the results of the analysis. The letter of endorsement must address at a minimum: ™ The importance of the problem ™ The contribution of the insight or solution ™ The impact of the result of the analysis

Please send the unclassified abstracts and a letter of endorsement to [email protected], include a complete MORS Abstract Disclosure Form 109 A/B. All forms and instructions are located on the website, click on the Rist link. www.mors.org From those entries submitted, the judges will select 3-5 finalists on or about Friday, 27 February 2009. If selected as a finalist: ™ A mentor will be assigned by MORS. ™ An annotated briefing must be prepared and submitted no later than Thursday, 30 APRIL 2009. All presentations will require a Disclosure Form 712 A/B. Classified submission information and a MORS Presentation Disclosure Form 712 A/B are available online. ™ The finalists will be invited to present their briefing on the Monday prior to the 77th MORSS (15 JUNE 2009, at the Symposium in Ft. Leavenworth, KS). Each finalist will be allotted one (1) hour to present — which includes time for questions and answers. This session will be open to all MORSS attendees. During this session judges will ask questions as appropriate. Following this session, the judges will select the Rist Prize winner and the honorable mention. The Rist Prize winner and honorable mention will be announced at the 77th MORS Symposium plenary session on the morning of Tuesday, 16 JUNE 2009. Rist Prize Criteria To be eligible, an individual or team must submit a presentation that at a minimum meets the following criteria: ™ ™ ™ ™

Be an original and self-contained contribution to systems analysis or operations research that has been implemented. Provide recognizable new insight into the problem or its solution. Have an impact on major decisions. Be used by a client organization and have letter(s) of endorsement from a GO/SES/Executive in the client organization so stating. Client organizations are defined as a Government agency or Laboratories, Industry activity, Academic Institution, or other user of the results of operations research.

Eligible study presentations are judged according to the following criteria: Professional Quality x Problem definition x Citation of related work x Description of approach x Statement of assumptions x Explanation of methodology

x x x x

Contribution to Military Operations Research and Major Decisions x Importance of problem x x Contribution to insight or solution of the problem x x Power of generality of the result

Analysis of data and sources Sensitivity of analyses (where appropriate) Logical development of analysis and conclusions Summary of presentation and results

Originality and innovation Contribution of the study to the decision

INTRODUCTION Important decision support problems faced by the Department of Defense (DoD) require formalized, traceable, and understandable analysis. These complex and detailed analyses may support a variety of decisions including Capabilities-Based Planning (CBP) to identify capability gaps, program analysis to support the development of a Program Objective Memorandum (POM) as part of the budgeting process, systems engineering studies to support design decisions, and analysis of alternatives to support acquisition decisions. Over the years, responding to these needs, a number of structured decisionmaking frameworks and methodologies have been developed. This paper reviews the general steps required to properly analyze a set of alternatives and discusses common pitfalls associated with each step of the process. Since the analyses from capabilities models are commonly used to allocate resources to the nation’s most critical capability gaps, pitfall-laden analyses could produce sub-optimal resource allocations at best and an inability to obtain Service, DoD, or Congressional support at worst. Our intent is to provide practical advice for creating a well-structured decision framework using a Multiple Objective Decision Analysis (MODA) approach (Keeney and Raiffa, 1976 and Kirkwood, 1997) supported by a Value-Focused Thinking (VFT) philosophy (Keeney, 1992). MODA and VFT, in conjunction with justifiable data from subject-matter experts, can provide a defensible and repeatable rationale for critical decisions. As we describe each step of the decision framework in detail, we enumerate many commonly occurring pitfalls in hopes of creating better practice for current and future decision analysts. This paper builds on the previous work identifying pitfalls in decision making (Keeney, 2002) and emphasizes in particular DoD analysis pitfalls observed by the authors over several years. Although there are several excellent textbooks and documented studies using MODA and VFT, poor modeling and analysis practices are still common. MODA and VFT can appear to be simple to understand, but are in fact quite complex, leading the unwary analyst into the very pitfalls iden-

Military Operations Research, V13 N2 2008

tified in this paper. The highlighted pitfalls should be avoided since they can lead to an inadequate problem definition and faulty analysis process, which could contribute to poor and inaccurate recommendations and sub-optimal resource allocations. The pitfalls can also contribute to poorly performed analyses, leading to confusion when the modeling and analysis results do not make sense to the decision maker and stakeholders. In turn, this may lead to the loss of critical support from sponsors and customers, or require re-work to achieve a satisfactory recommendation. This wasted time and effort can have a long-lasting impression on the sponsor who may be more likely to avoid a rigorous decision analysis approach in the future. This paper is organized into four sections. The first section identifies the relevant applications of decision analysis for DoD agencies and organizations. The second section provides a glossary of important decision analysis terms. The third section describes a Multiple Objective Decision Analysis process with Value-Focused Thinking and identifies the common pitfalls for each step. The final section provides a summary of the pitfalls and provides practical recommendations for avoiding each pitfall.

Avoiding Common Pitfalls in Decision Support Frameworks for Department of Defense Analyses Robin L. Dillon-Merrill, Ph.D. Georgetown University [email protected]

Gregory S. Parnell, FS, Ph.D. United States Military Academy gregory. [email protected]

Donald L. Buckshaw

APPLICATIONS OF DECISION ANALYSIS FOR THE DOD MODA has been applied to many DoD problems specifically military applications for the past three decades. Parnell (2007) provides many examples. The examples tend to group into four categories: capabilities-based planning, resource allocation, systems engineering, and analysis of decision alternatives. Capabilities-Based Planning (CBP) represents a shift in the planning framework for the DoD from addressing specific threats and scenarios to preparing future mission capabilities for uncertain scenarios. The process is detailed in Davis (2002) and includes activities such as: • Identifying capability needs • Assessing capability alternatives for effectiveness in missions and operations • Making choices about requirements and the various alternatives to achieve them

Innovative Decisions, Inc. dbuckshaw@ innovativedecisions.com

LTC (ret) William R. Hensley, Jr. The Kenjya Group, Inc. [email protected]

Capt David J. Caswell United States Air Force [email protected]

APPLICATION AREAS: Resource Allocation Analysis of Alternatives, Capability Based Planning, Systems Engineering OR METHODOLOGIES: Decision Analysis and Value-Focused Thinking

Page 19

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

• Considering the entire portfolio of alternatives to address future war-fighting capabilities, force management, and risk trade-offs in an economic framework. A well-structured decision framework using a MODA approach and a VFT philosophy can help accomplish all of these activities. The Program Object Memorandum (POM) provides a comprehensive description of proposed programs with budget justifications to support resource allocation decisions. This includes a time-phased allocation of resources (forces, manpower, and funding) by program projected six years into the future. A separate POM is developed by each DoD component, and senior leaders integrate the component POMs to define a coherent defense program. MODA-VFT provides defensible results which link budget expenditure directly to program value, ensuring DoD organizations are purchasing those systems that provide the most value. Systems Engineering is an interdisciplinary approach for designing, developing, and deploying successful systems. It focuses on defining the customer needs and the required functionality early in the development cycle, documenting these requirements, and then proceeding with design synthesis and system validation while considering the complete problem, including operations, environment, design, development, manufacturing, deployment, cost and schedule, performance, training, maintenance, test, and disposal (Wikipedia, 2006). Decision analysis is often used in systems engineering along side of other operations research techniques such as modeling and simulation, to help understand, design and compare enterprise architectures (Parnell, Driscoll, and Henderson, 2008). Analysis of Decision Alternatives consists of a broad class of decision problems where the analyst needs to compare the value of competing alternatives. Analysis of alternatives is used to evaluate the operational effectiveness, suitability and costs of alternatives to meet mission capability and plays a central role in acquisition decisions in the Joint Capabilities Integration and Development System (CJCSI 3170.01E, 2005).

Page 20

DEFINITIONS The following are the decision analysis definitions that we will use in this paper. Alternatives - The alternatives are the programs, system components, or the courses of action that the decision maker might select or individual items being evaluated. Functions - Characteristic tasks, actions, or activities that must be performed to achieve a desired outcome (INCOSE, 2004). Goal - A goal defines a level or standard with respect to a specific objective (Keeney, 1992). Alternatives will satisfy or not satisfy specific goals. The satisfaction of a goal is preferred but not required like a constraint. The term “goal” is often incorrectly used interchangeably with objective. Multiple Objective Decision Analysis Discipline for evaluating complex alternatives by systematically examining decisions and focusing on multiple, conflicting objectives (Keeney and Raiffa, 1976, Kirkwood, 1997). Objectives (Capability or Program Objectives) - An objective is something that the decision maker wants to achieve. In CBP and POM, these are the future capabilities. Objectives/Value Hierarchy (Capability Hierarchy) - The hierarchy is a pictorial representation of the objectives where the higher levels represent more general objectives and the lower levels explain or describe the higherlevel objectives. In CBP, this is a hierarchical representation of the capabilities. In the POM, this is a hierarchical representation of program objectives. For system engineering studies, the first tier of the hierarchy defines the system functions, and the objectives are on the second tier. Ideally, objectives and value measures have a 1-to-1 relationship in the model. Sometimes more than measure is required. Screening Criteria (Constraint) - Constraints are requirements (e.g., minimum or maximum thresholds specified by the decision maker) that should be used to remove any unacceptable alternatives from consideration. The intent of a constraint is to limit the set of alternatives to only those that are feasible, which helps bound the decision space. Stakeholders - People who will be affected by the decision or can influence it.

Military Operations Research, V13 N2 2008

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

Value Measures (Evaluation Measures/ Capability Measures) - A value measure is a scale to assess how well an alternative achieves an objective (or capability). Value-Focused Thinking - A philosophy for focusing the decision maker on values and ideal criteria rather than simply selecting criteria that will differentiate among a set of defined alternatives (Keeney, 1992). VFT is used in conjunction with MODA. Value Function - A numerical measure that describes the value of an alternative. The function converts a value measure score to a dimensionless value (Watson and Buede, 1987). Weight - A scaling constant used to capture the decision maker’s value tradeoffs among value measures. Weights depend on importance and the range of variation of the measures.

MULTIPLE OBJECTIVE DECISION ANALYSIS PROCESS USING VALUEFOCUSED THINKING A Multiple Objective Decision Analysis using Value Focused Thinking (MODA-VFT) approach is recommended to structure decision making for many DoD decisions. The original MODA can be dated by the publication of the seminal 1976 book Decisions with Multiple Objectives (Keeney and Raiffa, 1976) while VFT as a decision philosophy can be dated by the publication of Value-Focused Thinking in 1992

Figure 1.

Military Operations Research, V13 N2 2008

(Keeney, 1992). Unfortunately, some MODA studies have encountered several pitfalls which have limited the effectiveness and the efficiency of the approach. We hope that by illuminating these pitfalls, decision analysis practitioners can avoid future mistakes. The MODA-VFT framework should be considered a process for organizing the analysis and planning for communication with decision makers (Bodily and Allen, 1999). Decision analysts work for decision makers and work with stakeholders and subject matter experts. Decision makers provide decision analysts the initial statement of the problem and, usually, the resources to perform the analysis. Stakeholders provide key insights, objectives, and constraints that guide the analysis. Subject matter experts provide the domain knowledge to develop the value model, generate the alternatives, and provide the alternative scores. Figure 1 summarizes these steps graphically. The decision makers, stakeholders, and experts can and should participate in each step. These interactions need to be planned in advance and incorporated into the entire decision process. For example, the output of each step should be reviewed and approved by the decision makers. The arrows in Figure 1 provide the outputs of each step. For example, the output of step 1 is a clear problem definition based on information provided by the decision maker and stakeholders. In general, the steps that we consider for MODA (Kirkwood, 1997) using VFT are:

Steps in the Structured Decision Process

Page 21

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

Step 1: Define the problem and identify the stakeholders. Step 2: Identify the appropriate functions (system engineering problem) and objectives for the problem based on decision maker’s and other stakeholder’s values. Step 3: Develop value measures and value functions for each objective. Step 4: Identify and develop alternatives using VFT. Step 5: Assess swing weights for each measure from relevant decision makers and stakeholders. Step 6: Score alternatives on each of the value measures and assess the uncertainty associated with the scores. Step 7: Analyze the sensitivity of the analysis to assumptions and consider refining alternatives to create better alternatives using identified value gaps. Step 8: Provide recommendations and insights. The next eight sections briefly summarize each step and discuss pitfalls that commonly occur when developing and analyzing alternatives.

Step 1. Define the Problem and Identify Stakeholders This first step is the most critical and potentially poses the greatest challenges. A concise problem definition is necessary to serve as the focal point for the remainder of the analysis. The decision analysts should work with the initial task statement provided by the decision maker and with stakeholders who can contribute insight. The problem statement should clearly state what the analysis should provide in terms of deliverables, and should ensure that the task is solving the correct problem. Often, the analyst will create a problem statement focusing on the analysis method or tool, such as “create a value model” is a warning indicator. The focus of the problem statement should emphasize the correct problem and not the proposed solution technique. Pitfall #1. Lack of Clear Problem Definition. The problem definition must address

Page 22

the right problem. If the problem definition or decision is not clearly articulated, the most likely outcome is the MODA model will not appropriately address the problem. For example, a MODA model developed to evaluate gaps in a CBP gap analysis will be different than a model to evaluate programs in a POM resource allocation analysis. Additionally, for the model to be relevant and applicable, the problem definition should consider higher organizations’ policy, vision, and guidance statements.

The initial problem statement is never complete. A concise problem definition requires the agreement of all decision maker(s) and stakeholders. A second common pitfall is to exclude either the decision makers or key stakeholders in the development of this problem statement and/or to lose the engagement of one or more of these groups during the decision process. Pitfall #2. Not Interacting with and Engaging Decision Makers on a Regular Basis. If the decision makers are not involved in the early phases of the study and in periodic reviews, it is unlikely the recommendations will solve the decision maker’s problem.

Decision maker input is needed to explicitly define the decision space, identify important decisions which have already been made, and identify constraints on the analysis. If regular meetings are not held with decision makers, an unacceptable solution or sufficient rework are very likely outcomes. Pitfall #3. Not Involving Key Stakeholders. It is not possible to adequately define the problem without involving key stakeholders. Moreover, stakeholders who can participate actively in the process generally are more likely to support the outcome. Not involving key stakeholders may result in rejection of the solution by the uninvolved stakeholders or lack of support to implement the decision.

Stakeholders include customers, operators, oversight and peer organizations, and other subject matter experts (SMEs). Customers are the individuals that realize the benefit from the product or service, (e.g., the policy maker or warfighter). The operators of the system (or

Military Operations Research, V13 N2 2008

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

users) use or provide the product or service, (e.g., pilots, spacecraft operators, etc.). Decision makers within the DoD may need to consider higher headquarters and other organizations such as the Joint Chiefs of Staff, Office of the Director of National Intelligence (ODNI), the Combatant Commands, or other members of the defense and intelligence communities. In the cases of CBP and POM decisions, higher organizations will review major decisions and should be identified as important stakeholders in the process. SME teams may include operators, maintainers, developers, system engineers, system analysts, cost estimators, and risk analysts. In the first step, the decision framework will be planned, and it is critical that appropriate modeling tools are selected. For example, a different model is required if the analysis needs to evaluate a portfolio of alternatives rather than individual alternatives. A recent DoD analysis reviewed by the authors highlights this problem. The study created an alternativesbased model to determine which automated tools would create the most value for an intelligence program. After the model was completed, the customer then asked which set of tools would best serve the community. Problems immediately surfaced because the value measures were not constructed to show the additional benefit for diminishing returns to scale of multiple tools satisfying the same measure. The end result was an ad-hoc analysis based on heuristically determined value judgments and a non-intuitive capability combining equation. The customer could not understand the model and the analyst had no assurance the results were accurate. Pitfall #4. Choosing an Inappropriate Model. The most common type of MODA is an alternatives-based model. This model is used to identify the best alternatives from a set of potential candidates. However, in many CBP and POM analyses, alternatives may need to be scored as portfolios, and treating alternatives individually will not produce a reasonable set of alternatives when evaluated as a group. The model will need to be more complex for a portfolio and will need to be constructed and analyzed differently. Parnell

Military Operations Research, V13 N2 2008

(2007) provides a discussion of the two types of models.

In some cases, a MODA framework assuming certainty is appropriate. However, some problems will require modeling to quantify uncertainty, (e.g., decision trees, event trees or other risk analysis). If required, the use of additional risk modeling tools should be identified in this initial step.

Step 2. Identify the Functions and Objectives The objectives can be developed from both interviews with the senior management responsible for the ultimate decision recommendation and from “gold standard” documents (Parnell, 2007) that provide policy, vision, and guidance (e.g., joint capabilities documents, organization strategic guidance documents, and requirements documents.) In many applications, performing a functional analysis and identifying the functions needed by the system provides an important framework for recognizing and structuring objectives. The objectives should capture all the factors that are important to the decision maker(s) in choosing a preferred alternative. In CBP studies, the decision maker is concerned with meeting future mission needs. Therefore, the study should focus on capability objectives based on the needs such as having the capability to “locate terrorists in underground bunkers”, “destroy theater missile platforms” or “share information with other organizations.” These objectives should consider the decision maker’s ideal outcomes. Focusing on ideal capabilities will often support the development of better alternatives rather than simply focusing on the initial (often obvious) set. When numerous capabilities objectives are relevant to the decision process, these objectives should be structured logically into a simple, understandable hierarchy. The hierarchy is the qualitative representation of the model and will be used to represent and describe the model to an audience who may not understand the technical work behind the model. The qual-

Page 23

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

ity of the hierarchy will influence the quality of the quantitative model. The upper levels in a hierarchy represent more general objectives or system functions, and the lower levels decompose these more general objectives into specific objectives that explain or define the higherlevel objective. Keeney and Raiffa (1976) suggest five criteria that can be used to judge the value hierarchy: completeness, absence of redundancy, operationality, decomposability, and minimum size. • “Completeness” refers to the set of objectives being collectively exhaustive. It should be noted that collectively exhaustive does not mean that all possible capabilities objectives must be included, but that all pertinent objectives to the decision at hand must be included. • “Absence of redundancy” means the objectives are mutually exclusive. • Operationality relates to whether the criteria are specific enough for the decision maker to use to compare and evaluate different alternatives (this will be discussed in more length in the next section on constructing value measures). • Decomposability specifies that an alternative can be judged on one objective independent of its performance on other objectives. • Finally, minimum size is important for implementing the model. If the hierarchy is too large, any meaningful analysis may be impossible. There is an inherent contradiction between completeness and minimum size, and the decision analysts must help the decision makers and stakeholders find the appropriate balance to develop a model that captures all the significant objectives while still being manageable. Often good decisions can be made with a relatively small number of value measures (six to ten). Pitfall #5. Poorly Developed Objectives Hierarchies. The most common problem in structuring hierarchies is lack of logical organization and lack of definitions for the functions (if applicable) and objectives. If the hierarchy is not logically organized, decision makers and stakeholder will not

Page 24

understand it. Lack of clarity in the objectives can lead to the development of a set of objectives that is either redundant or not complete. If the same objective appears in two places in a hierarchy, then alternatives possessing that capability will get doublecredit and those lacking that capability will get double-penalized. Similarly, if a fundamental objective is missing from the hierarchy, that important capability will not be evaluated in the decision making process.

Figure 2 shows two hierarchies. The first hierarchy is not organized in a logical manner or grouped in a way that would help achieve understanding and acceptance. People can more easily remember three to five objectives, especially if they are sequenced in a logical order, e.g., know, move, destroy or move, shoot, communicate. The bottom hierarchy is much easier to understand since the objectives are sequenced in a logical order, with each objective defined by sub-objectives also sequenced in a logical order. This logical organization will be helpful to explain the model to stakeholders and will help avoid overlap of the objectives. If some minimal level of achievement must be met for a particular capability, this should be treated as a constraint. Constraints are different than objectives. These constraints should be used as screening criteria to remove any unacceptable alternatives from consideration. The intent of constraints is to limit alternatives. Pitfall #6. Not Treating Constraints as Screening Criteria. A common pitfall is not explicitly treating constraints as screen-

Figure 2. Example Hierarchies

Military Operations Research, V13 N2 2008

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

ing criteria but rather trying to incorporate these criteria in the objectives. A common indication of this pitfall is when value measures have only two levels: no and yes. Including constraints in evaluation criteria can lead to inappropriate weights and the incorrect ranking of alternatives.

For some studies, it is appropriate to include cost as a fundamental objective in the value hierarchy. However, we recommend hierarchies developed to support CBP and POM analyses should not include a cost objective. Pitfall #7. Including Cost in the Objectives Hierarchy. The focus of a CBP or POM analysis is the benefits or the capabilities, not the cost of the resources. Some acquisition approaches, e.g., the cost as an independent variable (CAIV) approach, require the separation of cost and value. In these cases, it is inappropriate to include cost in the objectives hierarchy. The benefit-cost comparison is made during a subsequent stage in the decision making process (as shown in Figure 1 in steps 6 and 7).

Step 3. Develop Value Measures and Value Functions Once the set of objectives are identified, value measures are required for evaluating how well alternatives will meet each objective. Value measures should capture how well a particular alternative performs for each objective and will provide a standardized value that can be compared across objectives. For example, the objective “to destroy theater missiles platforms” can be measured in terms of the percent of theater missiles destroyed by a particular system alternative. Sometimes natural scales such as numbers or percents will exist that can serve as value measures (e.g., pounds lift-capacity, rounds fired per minute, or miles per gallon). However, many objectives will require constructed scales for scoring. Ideally, each objective should have only one direct value measure. After identifying the appropriate value measure for each objective, value functions are defined based on input from decision makers and stakeholders to quantify the relationship

Military Operations Research, V13 N2 2008

between achievement of the capability and the value associated with possible levels of achievement. For example, consider a heavy-lift helicopter capability. Assume that a helicopter that cannot lift at least 5,000 pounds is unacceptable. Figure 3 shows an example value function for the heavy-lift helicopter capability where anything less than 5,000 pounds is not a feasible solution. A helicopter that can lift 5,000 pounds (screening criteria) is feasible but has 0 value. Value increases rapidly from 5,000 to 12,000 pounds. Above 12,000 lbs., there is some additional value for the ability to carry additional payload. Assessing a value measure is a challenging task and many pitfalls are common in this step. Keeney (2002) specifically focuses on mistakes made in value assessment and this discussion extends many of his observations with additional pitfalls commonly seen in decision analysis to support DoD applications. Pitfall #8. Improper Use of Probabilities as Value Measures. Some studies incorrectly use probabilities as measures instead of chance nodes in a decision tree. We separate the probability from the value using both decision trees and MODA and assign a value of 0 for outcomes with no value (Parnell, 2007). For example, consider a potential value measure that is the probability of successful launch of an intelligence satellite. The correct technical approach is to make launch success a chance node in the decision tree since the value would be 0 if we had a launch failure.

Figure 3. S-shaped value function with Minimum Screening Criteria

Page 25

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

Often, an organization will try to create value measures out of easily collected metrics. Usually these metrics are numeric, historic, and available, but not always pertinent to measuring future value. For example, a common practice is to judge the worth of an on-going program by measuring how close the program is meeting its planned resource expenditure plan (burn plan). It is easier to gather burn-rate data for a program then to construct a meaningful scale to judge program management. Another example would be a military psychological operations unit that measures success by counting the number of propaganda leaflets dropped. A direct measure would be potential effect of the pamphlets on the targeted population. Often with these proxy measures, analysts will use several separate but interrelated value measures instead of using one aggregated scale. Pitfall #9. Using multiple proxy measures. A common pitfall is to rely on multiple proxy measures to capture various aspects of the objectives rather than constructing one direct value measure scale that captures all aspects simultaneously.

An aggregated measure is simpler to interpret and will make the goals of completeness and absence of redundancy easier to achieve. An example of an aggregated value measure is Probability of Kill of a weapon system against a specific target. This aggregated measure is simpler than including measures for several engineering properties of the weapon and the sensor systems, such as Probability of Detection, Probability of a Hit and Probability of Destruction. Modeling and simulation could (and should) be required to assess an alternative’s capability on an aggregated measure. A second useful approach is to use two dimensional measures (Ewing, Tarantino, and Parnell, 2006) to model dependent value measures.

Step 4. Generate alternatives The search for alternatives should be a creative process focusing on the values guiding the decision. Analysts must ensure they develop alternatives better than the initial alternatives. In a CBP analysis, the fundamental purpose is to identify capability gaps. Analysts

Page 26

need to help SMEs focus their attention on the capability gaps and encourage them to develop better alternatives. This is why VFT is recommended for CBP. However, once the data have been collected and the model run, there is often a desire to finish the work instead of searching for better alternatives. Often, this is driven by a tight schedule that does not permit adequate time for analysis. Time must be included in the schedule for creating and evaluating new alternatives based on the insights gained from the MODA model. Pitfall #10. Not Using Value-Focused Thinking to Develop Better Alternatives. The most significant pitfall at this step is to not use Value-Focused Thinking. Analysts should constantly look for better alternatives since the purpose of the exercise is to focus creative thinking on identifying future opportunities to create value. If you are not generating better alternatives you are doing MODA and not VFT.

Throughout the alternative generation process, it is incumbent on the analyst to ensure the value measures defined in the previous step are relevant to distinguishing the most preferred alternatives from the lesser ones. It must be possible to evaluate the impact of the alternatives on the value measures and most importantly the model must discriminate among the alternatives. From Ulvila (2006, p. 1), “A value model that is a perfect representation of the organization’s goals and objectives but is incapable of discriminating among programs is useless for the budgeting decision.” Pitfall # 11. Value Measures That Do Not Discriminate Between the Alternative Solutions. The purpose of the value measures is to identify the ideal solution and discriminate between alternatives. Upon completion of the value measures, a check is required to ensure the model constructed is appropriate for evaluating the alternatives under consideration. In our experience, this pitfall logically follows Pitfall #1.

Step 5. Assess Swing Weights Weights are required to trade off the objectives since not all value measures will be (nor

Military Operations Research, V13 N2 2008

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

should they be) equally weighted in the final model. Unfortunately, several errors are common in assessing weights to make trade-offs. The first is not assessing weights from the decision maker. This is a continuation of Pitfall #2 (Not Interacting with Decision Makers on a Regular Basis). Alternative sets of weights can be assessed from other stakeholders, but it is essential to have a set of weights that represent the decision makers’ values. These alternative sets of weights can be used as part of the sensitivity analysis. Another common problem is not understanding what the measures represent, which is critical before attempting to evaluate trade-offs. This is a continuation of Pitfall #5 (Poorly Developed Objectives Hierarchies). For example, a capability objective could be to “Destroy Target.” While this should be easy to understand, does degrading the target without completely destroying it provide any value? These types of questions need to be resolved before assessing weights. Regrettably, an intuitively appealing way to attach weights to value measures is based on their perceived importance to the decision maker. The first challenge is that “importance” is usually poorly defined. Additionally, importance weights do not take into account the range between the lowest and highest levels of the value measures. The result is that if alternatives score very similarly on a particular measure (i.e., the range between best and worst is small) then that measure is unlikely to be important in this decision even if the decision maker considers it to be an important measure in the abstract. Pitfall # 12. Not Using Swing Weights. Assessing trade-offs using swing weights can resolve this problem. Swing weights (von Winterfeldt and Edwards, 1986) are derived by asking the decision maker to compare a change (or swing) from the least-preferred to the most preferred score on one measure to a similar change in another measure. This process emphasizes the range for which the value measure is defined.

Two good modeling techniques to obtain swing weights are the swing weight matrix (Ewing, Tarantino, and Parnell, 2006) and the

Military Operations Research, V13 N2 2008

Balance Beam (Watson and Buede, 1987). The swing weight matrix defines both importance and variation dimensions for the decision problem. Even when using swing weights, too often the approach is to distribute 100 points among the objectives. This method does not emphasize the importance and variation dimensions that are critical to assess correct weights. The simplest model for incorporating weights is the additive function where scores on each value function are weighted by the appropriate trade-off value:

冘 n

v共 x兲 ⫽

i⫽1

冘w ⫽ 1 n

w iv i共 x i兲 ,

where

i

(1)

i⫽1

Many decision models ignore value dependencies in the objective hierarchy. In order to use this appealingly simple additive model, the hierarchy must show mutual preferential independence (Kirkwood, 1997). This case exists, if given a fixed level of achievement on one measure, the values of the other measures can still be assessed independent of the fixed measure. Pitfall #13. Using an Additive Model When Measures are Not Preferentially Independent. Many people (including analysts) do not understand this issue, do not test the model for mutual preferential independence, and do not know how to fix dependency issues once discovered. If dependency exists, multidimensional scales can be created that combine the two dependent measures into one measure (Ewing, Tarantino, and Parnell, 2006). This is often seen when objectives include both quality and quantity. Pitfall #14. Confusing Probabilistic and Preferential Independence. In considering independence assumptions for the value measures, many people (including some analysts) confuse preferential and probabilistic independence.

Value measures should be preferentially independent (Kirkwood, 1997) even if they are probabilistically dependent. For example, consider the uncertainties in a project: cost, schedule, and performance. If technical problems occur, the schedule will probably slip and the cost

Page 27

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

(to find solutions) will probably increase. If a schedule slip occurs, the cost will probably increase. These are probabilistically dependent. However, the decision maker’s value depends on performance, schedule, and cost. If at various levels of performance, the decision maker’s time and cost preferences are the same, we say performance, schedule, and cost are preferentially independent.

Step 6. Score Alternatives Alternatives are evaluated by scoring them against the individual value measures, converting the scores to standardized values using value functions, and combining the standardized values into an overall value (Equation 1). One pitfall is that sometimes it may be difficult to score alternatives on some of the measures; therefore, the model requires some revision during the scoring elicitation session. Many of these problems can be identified if the modeling team conducts a practice scoring session in advance. Pitfall #15. Poor Scoring Practices. Major problems can occur if the alternatives are not described in sufficient detail for different people to score the alternatives consistently. For example, too often alternatives will be scored on the title rather than on the details of the program, capability, or system. This can lead to inconsistent scores and unstated assumptions about the alternatives. Even if the alternatives are described in sufficient detail, there can be scoring inconsistencies based on the scoring group’s areas of expertise. If significantly different groups score alternatives, then scoring may be inconsistent across alternatives (e.g., more pessimistic, more optimistic, etc.). SMEs should be involved in this step and should be educated to biases such as anchoring on (possibly irrelevant) information and failing to adjust from the anchor (Tversky and Kahneman, 1974). It is also prudent to use a scoring validation group that adjudicates inconsistencies and approves the scores.

Page 28

Step 7: Conduct Analysis, Examine Sensitivity, and Re-Apply ValueFocused Thinking A sensitivity analysis should determine if the preferred recommendation is sensitive to small changes in one or more aspects of the model. If the model is determined to be highly sensitive to a particular weight or assumption, the decision maker may wish to reconsider carefully that aspect of the model. Also, the results of the sensitivity analysis need to be presented to the decision-maker in terms they understand and decisions they can make given different assumptions or scenarios. VFT must be reapplied with the insights learned in the analysis. Pitfall #16. No or Inappropriate Sensitivity Analysis. Sometimes little or no sensitivity analysis is performed. At a minimum, sensitivity to weights at the top of the hierarchy should be performed. The standard approach is to vary one variable at a time keeping all the other weights in the same proportion. However, it may be more appropriate to do two-way or threeway weight sensitivity or other approaches.

Step 8. Provide Recommendations and Insights Of equal weight to the importance of formulating the problem correctly in Step 1, is focusing on the analysis and carefully preparing, presenting, and documenting the recommendations. As part of the initial planning process, analysts should consider how the results will be used, what other analyses might need to be integrated, and how will the process be documented. Models, graphs, and charts that are vital for the analyst to understand the data may be useless for helping the decision maker understand the insights of the analysis and the justification of the recommendation. Pitfall #17. Poorly Communicating the Recommendations. Too often people assume the process is complete once the

Military Operations Research, V13 N2 2008

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

Figure 4.

Frequency, Severity, and Impact of Pitfalls

math step is completed in Step 7. However, the results must be presented in a clear and understandable way to decision makers. Unfortunately, there is often a desire for many analysts to want to show the decision-maker the mathematical details that allowed them to arrive at their solution. Time and budget must be available at the end of the process to involve the decision maker and stakeholders to examine and explain the results, and this step must be planned for before the analysis begins.

SUMMARY Clearly, one or a combination of many of these cited common pitfalls will derail any MODA-VFT analysis, despite the best intentions of the analyst team. The frequency of occurrence and the severity of the outcome may vary by pitfall. Figure 4 displays where the pitfalls generally occur in the sequence of steps, the frequency of occurrence categorized by very common, common, and occasional, the severity (the most severe are denoted by the black icon), and the impact to the process denoted by the length of the ar-

Military Operations Research, V13 N2 2008

row. The numbers correspond to the pitfalls as described in the previous section and as listed in Table 1. These pitfalls can be avoided, and it is imperative that the analyst team guide the decision makers and customers through the potential hazards caused by these common pitfalls. We created Table 1 below to provide some best practices to avoid each pitfall. In conclusion, the MODA-VFT process can be a valuable tool, if performed correctly. MODA-VFT provides a framework to engage and involve multiple decision makers and all stakeholders. It is a logical process, which most people find easy to understand. It includes useable procedures to implement the logic through each step of the process. It applies to all resource allocation decisions involving multiple objectives. It provides a traceable and defensible analysis, and is a repeatable process for the organization and decision maker. This paper highlights some of the most common and more damaging pitfalls associated with a poorly executed MODA-VFT analysis and provides some brief recommendations to both novice and experienced decision analysis practitioners on how to avoid these pitfalls on studies conducted for the DoD.

Page 29

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

Table 1. Number

Best Practices to Avoid Pitfalls Potential Pitfall

#1

Lack of clear problem definition

#2

Not interacting with and engaging senior leaders on a regular basis Not involving key stakeholders

#3

#4 #5

Creating an alternatives-based model when a portfolio model is needed Poorly developed objectives hierarchies

#6

Not treating constraints as screening criteria

#7 #8

Including cost in the objectives hierarchy Improper use of probabilities as value measures

#9

Using multiple proxy measures

#10

Not using value focused thinking to develop better alternatives Value measures that do not discriminate between alternative solutions Not using swing weights

#11 #12 #13

Using an additive model when measures are not preferentially independent

#14 #15

Confusing probabilistic dependence with preferential independence Poor scoring practices

#16

No or inappropriate sensitivity analysis

#17

Poorly communicating the recommendations

ACKNOWLEDGEMENTS We would like to thank Terry Bresnick, Dennis Buede, and Jacob Ulvila of Innovative Decisions Inc. for their helpful comments on earlier drafts. We would like to thank the reviewers for their suggestions including the

Page 30

Best Practices to Avoid Pitfall Structure an initial study activity for problem definition that involves all stakeholders (Parnell, Driscoll and Henderson, 2008). Schedule regular In Process Reviews (IPRs) or event driven IPRs with senior decision makers. Identify stakeholders early and arrange for participation at important stages (e.g., defining problem, constructing hierarchy, scoring alternatives). Carefully review problem definition and identify if portfolio model is appropriate early in the process. Focus on clearly structuring levels and defining all objectives. Use 3–5 logically ordered objectives on the top layer. Identify constraints early and keep separate from objectives. Identify early if inclusion of cost is appropriate. Use a decision tree to model probabilities that have outcomes with 0 value. Avoid reliance on easy to measure proxies and focus on one direct measure for each objective. Continually search for better alternatives. Focus on measures that will discriminate among alternatives; review model output to validate. Structure weight assessment using swing weight matrix or balance beam techniques. Understand and review independence assumptions carefully. Use multi-dimensional value measures to model dependencies when appropriate. Review and document independence assumptions carefully. Involve subject matter experts (SMEs) and consider having a scoring review group. Perform sensitivity analysis and assess the implications on your recommendation and further analysis. Plan time in the study for preparing and presenting recommendations.

suggestion to add the sensitivity analysis pitfall.

DISCLAIMER The views expressed in this paper are those of the authors and do not reflect the official

Military Operations Research, V13 N2 2008

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS . . .

policy or position of the United States Government or the Department of Defense.

REFERENCES Bodily, Samuel and Allen, Michael. (1999) “A Dialog Process for Choosing Value-Creating Strategies,” Interfaces 29(6): 16 –28. CJCSI 3170.01E (2005) Joint Capabilities Integration and Development System, Department of Defense, 11 May 2005. Davis, Paul K. (2002) Analytic Architecture for Capabilities-Based Planning, MissionSystem Analysis, and Transformation, RAND, MR-1513-OSD. Ewing, P.L., Tarantino, W., and Parnell, G.S. (2006) “Use of Decision Analysis in the Army Base Realignment and Closure (BRAC) 2005 Military Value Analysis,” Decision Analysis, 3(1): 33– 49. International Committee on Systems Engineering (INCOSE-TP-2003-016-02) INCOSE SE Handbook, Version 2a, June 2004 Keeney, Ralph. (1992) Value-Focused Thinking, Harvard University Press: Cambridge, MA. Keeney, Ralph. (2002) “Common Mistakes in Making Value Trade-offs,” Operations Research, 50(6): 935–945. Keeney, Ralph and Raiffa, Howard. (1976) Decisions with Multiple Objectives: Preferences and Tradeoffs, Cambridge University Press: New York. Kirkwood, Craig W. (1997) Strategic Decision Making: Multi-objective Decision Analysis with Spreadsheets, Duxbury Press: Belmont, CA.

Military Operations Research, V13 N2 2008

Parnell, G., Metzger, R., Merrick, J., and Eilers, R. (2001) “Multiobjective Decision Analysis of Theater Missile Defense Architectures,” Systems Engineering, 4(1): 24 –34. Parnell, G. S. (2007) “Chapter 19, ValueFocused Thinking”, Methods for Conducting Military Operational Analysis. Military Operations Research Society, Editors Andrew Loerch and Larry Rainey, pp 619 – 655. Parnell, G. S., Driscoll, P. J., and Henderson D. L., (2008) Editors, Decision Making for Systems Engineering and Management, Wiley & Sons Inc. Tversky, Amos and Kahneman, Daniel. (1974) “Judgment Under Uncertainty: Heuristics and Biases,” Science, 185: 1124 –1131. Ulvila, Jacob. (2006) “Pitfalls to using a value model for POM budgeting (and ways to avoid them),” Innovative Decisions, Inc. Working Paper, February 23, 2006. von Winterfeldt, Detlof and Edwards, W. (1986) Decision Analysis and Behavioral Research, Cambridge University Press: New York. Watson, Stephen and Buede, Dennis. (1987) Decision Synthesis: The Principles and Practice of Decision Analysis, Cambridge University Press: New York. Wikipedia (2006) for definition of Systems Engineering, http://en.wikipedia.org/wiki/ Systems_engineering, accessed May 11, 2006.

Page 31

Society Benefi ts M I L I T A R Y

O P E R A T I O N S

R E S E A R C H

Since its incorporation as a professional society in 1966, MORS has provided timely and valuable support to analysts and decision makers in the defense community. The Society has a solid foundation of 40 years of service to our profession and we have hit the ground running on our 43rd year. We have incredible opportunities to make the Society even better. This year’s theme, Leading the National Security Analytical Community, provides the focus for the upcoming year.

S O C I E T Y

MORS Membership Dues 1 year - $75.00 2 years - $140.00 3 years - $210.00

MORS offers a place for analysts to: Engage in professional dialogue and peer review. Educate members on new techniques and approaches to analysis. Assist in the accession and development of career analysts. Preserve the heritage of military operations research.

Member Benefits NEW!! Full Time Student Membership Fees are $25 per year!* *Must be renewed annually

As a MORS member you receive the following: A subscription to the MORS quarterly bulletin PHALANX A MORS Lapel Pin MORS Membership Card Reduced Meeting Fees Save $405.00 a year on meeting fees as a GOVERNMENT MORS member!* Save $570.00 a year on meeting fees as a NON-GOVERNMENT MORS member!* * The savings above were calculated assuming participation at each of this years meetings, including five MORS special meetings and the annual Symposium! Attendance at all MORS events will maximize your dues investment!

MORS 1703 N. Beauregard St, STE 450 Alexandria, VA 22311 703-933-9070 FAX: 703-933-9066 MORS Office: [email protected] MORS Executive VP: Krista Paternostro [email protected] MORS Administrator: Cynthia Kee [email protected] MORS Meeting Planner: Colette Burgess [email protected] MORS Communications Manager: Corrina Ross-Witkowski [email protected] Administrative Assistant: Tiffanie Lampasona

ABSTRACT

M

ultiple Objective Decision Analysis (MODA) or Multi Criteria Decision Analysis (MCDA) provides a means to support decisions involving conflicting objectives, complex alternatives and major uncertainties. The United Kingdom Ministry of Defence (MoD) Equipment Capability Customer (ECC) has used Decision Conferencing based on the MCDA Equity software, a commercially available off the shelf package, for over five years to prioritise proposed changes to the Equipment Plan (EP), in the light of prevailing assumptions about the security environment and financial conditions. The paper describes the context of the application of Value Focused Thinking (VFT) within MoD. The development of the MCDA process is explained and a review of the social and technical aspects of the introduction and application of the methodology from 2001 to 2006 detailed. The paper ends with summaries of the challenges to methodology and application, lessons indicated and conclusions on this example of VFT.

INTRODUCTION Multiple Objective Decision Analysis (MODA) or Multi Criteria Decision Analysis (MCDA) provides means to support decisions involving conflicting objectives, complex alternatives and major uncertainties. It offers a complement to the classical forms of analysis by capturing expert judgements on operations, rather than attempting to model the full complexity of the operations themselves. Such judgemental analysis is best used as a complement to quantitative analysis techniques rather than as a substitute for them. Judgemental methods allow the extension of otherwise constrained studies across fragmented organisations. Quantifying the values and evaluation of the alternatives is a means to bring together diverse stakeholders in order to determine a way ahead. Philips and Bana e Costa (2005) describe the original Decision Conference as emerging from discussions at Decisions and Designs Incorporated of new factory designs for the Westinghouse Elevator

Military Operations Research, V13 N2 2008

Company in May 1979. There are now many styles and a range of tools to support the gatherings of key decision makers in Decision Conferences. These discussions are characterised as Decision Conferences by their being facilitated by an impartial process consultant using a model as the focus and structure of the discussions. The United Kingdom provides an example of Value Focused Thinking (VFT) MCDA of unusually broad scope. The Ministry of Defence (MoD) Equipment Capability Customer (ECC) has used Decision Conferencing based on the MCDA Equity software, a commercially available off the shelf package, for over five years to prioritise proposed changes to the Equipment Plan (EP) and in future it will be applied to the more comprehensive Defence Plan (DP). The MoD procures equipment for the three armed services via the EP. A key step in the planning process is to decide the relative priorities of proposed changes to the programme of equipment being procured in the light of prevailing assumptions about the security environment and financial conditions. The process is endorsed for use within wider UK government. It has been applied in a number of areas and formal guidance is provided in the Multi-Criteria Manual (DETR 2000) and the UK published Treasury Green Book (HM Treasury 2002). The next section describes the context of its application within MoD. The development of the MCDA methodology follows this, with subsequent sections reviewing the social and technical aspects of development and application. The paper ends with summaries of the challenges, lessons indicated and conclusions on this example of VFT.

Bringing Value Focused Thinking to Bear on Equipment Procurement Jennifer Austin Defence Science and Technology Laboratory [email protected]

Ian M Mitchell Defence Science and Technology Laboratory [email protected]

THE CONTEXT FOR THE GENESIS OF A VFT APPROACH In 1999, the Smart Procurement Initiative led to the reorganisation of the MoD Operational Requirements staffs and Programme staffs in a single organisation. The new MoD Equipment Capability Customer (ECC) was responsible both for setting the requirements for military equipment and for managing an affordable programme. The ECC currently has a Joint Capabilities Board (JCB) in charge of 11 Directorates of

APPLICATION AREA: Decision Analysis OR METHODOLOGIES: Multiobjective Optimization and Decision Analysis

Page 33

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

Equipment Capability (DECs). The DEC includes military, scientific and administrative personnel, including Operations Research (Operational Research, OR also known as Operational Analysis, OA) specialists. The DECs are each responsible for a part of the EP comprising equipment “lines”, from the accounting lines, for a range of equipment projects, such as radios, armoured fighting vehicles, ships, aircraft or weapon systems. The Key Output for the DECs has been the EP. The EP has a 30 year planning horizon, with the first 10 years programmed in detail. During each planning round DECs are required to offer up a proportion of their programs as potential savings, and they are invited to propose enhancements, often additions to the plan with the aim of creating a balanced and affordable programme. Systems Engineering has been the foundation for the approach to the acquisition of equipment capability. The ECC is founded on Capabilities-Based Planning rather than just buying equipment. The Defence Acquisition Change Programme (DACP) (an internal UK MoD study reporting in 2006) reinforces these themes, particularly involvement of the other Defence Lines of Development (DLOD) which complement Equipment. These DLODs include Concepts and Doctrine, Training, Personnel, Organisation, Information, Infrastructure, and Logistics. The original aims of the changes to defence acquisition remain, namely that it be faster, better and cheaper.

Page 34

noted at the ministerial level. The total budget exposed to the process was of the order of £100 billion over a 10 year period. The DECs consider their programs in detail with the stakeholders in their respective areas of capability. The JCB decide which capability shortfalls have the highest priority for investment, offsetting these against the least damaging savings to make the programme affordable and to provide the necessary headroom. A means to develop a joint perspective, with an audit trail, is essential to provide a rational approach. When the ECC was first set up, the approach of developing a prioritised list of changes to equipment funding lines based on options for saving and options for enhancement was well established. However, the key decision process relied heavily on subjective judgement and was opaque to the wider stakeholder community. There was a recognised need for a more understandable and transparent decision process so that the rationale for the order of priority was amenable to examination and exploration. Decision Conferencing was turned to as a better means to balance investment in equipment capability. This required traceable evidence and priorities to reference for the Budgetary Planning and Programming by the ECC. A possible solution emerged in 2000 during discussions of the Future Surface Combatant (FSC) user requirements. The advantage of the process to clarify options and explore alternatives became apparent. The FSC work acted as a pilot for the method.

THE POTENTIAL FOR VALUE FOCUSED THINKING

DECISION CONFERENCING

There is never enough money to procure every piece of equipment that every DEC could aspire to, so there is always a need to prioritise which equipment should be acquired and which should be reduced, deleted or deferred. Equity-based Decision Conferences are key parts of the process, determining the priorities within the DECs and across the ECC as a whole from the point of view of the JCB. The impact of the work was high both within the ECC with great improvements to its internal communications. The utility of the work was

Decision Conferencing seeks to develop a shared understanding of a problem, generate a sense of common purpose amongst the stakeholders and instil a commitment to the way forward. It achieves these aims through a sociotechnical process. The social aspect is a professionally facilitated working meeting, attended by key stakeholders. The technical part relies on the Equity 3 software package to generate a model of the decision problem comprising options scored within their areas against weighted criteria. The

Military Operations Research, V13 N2 2008

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

success of the socio-technical approach relies as much on the facilitation as the modelling. The use of Equity required a coherent approach across three levels of decision making. The foundation was the detail of each DEC’s domain. This was debated with stakeholders from the front line commands and from other parts of the MoD. The DECs then presented a summary of their Decision Conferences to their peers at the cross-DEC conference, chaired by the Director of the Equipment Plan. The result of this conference was then presented to the Joint Capabilities Board, who reviewed the pan-ECC priority order in the light of a number of political and operational factors. Their decisions were then presented to a senior MoD committee, the Policy and Programmes Steering Group for implementation.

EQUITY - THE TOOL OF CHOICE

2001 - THE INTRODUCTION AND GROWTH OF DECISION CONFERENCING IN THE MINISTRY OF DEFENCE

• ’Similar’ options are grouped together on a capability basis in areas called ’towers’, • Benefit criteria are selected against which the value of options can be measured, • Within each investment area/tower, options are given subjective scores reflecting their worth, (relative to the other options in the tower against a criterion), • Towers are then weighted against each other within a criterion, • Finally, a relative weight for each criterion is established.

The first step was the building of capacity to conduct Decision Conferencing. Twelve volunteers from the ECC and the then Defence Evaluation Research Agency (DERA) including military officers and civilians were trained over 12 days from March to May 2001 at the London School of Economics (LSE). The immediate requirement was to develop the technical knowledge and skills to run the Decision Conferencing process. There was also a social infrastructure which naturally developed between the group of 12 students. To have a viable capacity required the leads to have knowledge of MCDA, the representation of preference values in terms of scores and weights. They also needed skills to design, build and analyse Equity models. The training introduced Equity and other decision support software and methods. In addition to tuition, discussion, and exercises, the course used the introduction of decision conferencing to the ECC as an ongoing exercise. This initial training was broader than that of the subsequent years giving a range of tools for possible use in the ECC.

Military Operations Research, V13 N2 2008

Equity 3 software is a PC based MCDA tool, developed by the LSE, and now available through Catalyze Ltd in the UK. It has developed from earlier versions of Equity 2 that were originally used in the MoD; Equity 3 has a Windows based graphical user interface. All versions of Equity assist individual decision makers and organisations in obtaining better value-for-money, when allocating limited resources and budgets. Equity produces a list called the Order of Priority. This list shows the order in which projects should be funded to maximise the benefit for the level of expenditure made. This benefit-to-cost ratio is also used to produce graphic displays of potential portfolios. The following steps are involved in running a Decision Conference using Equity: For each model:

This allows the weighted scores for an option to be combined in a measure of its overall benefit to defence; all options are then comparable. Two-person teams of an analyst from the DEC staff and a facilitator, from outside the DEC, conducted the Decision Conferences. The analyst’s first task was to create a model in Equity 3. Figure 1 illustrates a generic example for a notional meeting looking at readiness and training. The view projected is known as the “cityscape”, as the areas resemble office and apartment blocks in an urban skyline. This example has three areas comprising differing capability areas. Equity structures the equipments as options or levels within investment areas. There are two types of areas that are generally used:

Page 35

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

Figure 1.

Cityscape view of areas and levels (Equity screen shot)

• Cumulative where one, many or all of the options could be included or • Exclusive where one and only one of the options can be chosen from the area. Cumulative areas lend themselves to the EP’s many equipment lines grouped within the areas controlled by DECs, but occasionally there is a requirement to include an exclusive area. Equity 3 allows multiple criteria to be defined for both costs and benefits. In the ECC, criteria are mandated centrally to allow consistency sufficient for valid transfer between the multiple DEC models and the single JCB level model. The criteria settled on represent the value of the equipment towards operational success in three types of military endeavour.

Figure 2:

Page 36

To populate the model with data, matrices are used as shown by Figure 2. Two types of cost data are shown: “Benefit”, to allow comparisons of military capability independent of the money involved and a representation of annualised cost Total Annualised Cost (“TAC”). The three benefit criteria assess the value of the options across a spectrum of conflict. The Benefits are all couched in similar terms. These are the value contributed by the option to the level of military capability required for campaign success in three types of operation. These are Deliberate Intervention Operations (DIO), Other Warfighting Operations (OWO) and Peacetime Activities (PA). Implicit in these definitions was the integration of value over time, including the duration of the

The input data for an area (Equity screen shot)

Military Operations Research, V13 N2 2008

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

benefit as well as the delay until it would become available. All criteria were used by all DECs, providing coherency in the approach across the ECC. Scoring value, which incorporates subjective viewpoints, can be challenging to many participants who are more accustomed to reviewing hard metrics of effectiveness and performance. To assist the elicitation of scores, Equity 3 has graphical displays as shown in Figure 3. The histogram provides a visual representation of the scores and aids discussion of the relationship between the individual options. In this instance Option 3 and Option 4 together provide more benefit than Option 2 on its own (90⫹20⬎100). This simple arithmetic, “Balance Beam” approach, taught by the LSE, is one method of validating the elicited scores. The facilitator has to constantly probe to ensure consistent scoring. Equity 3 displays assist by directly re-scaling other options so that the full effect of scores is more visible. The histograms show relative value of options onscreen. With the earlier versions of Equity the facilitators often would indicate relative value with their hands or simple line displays on a whiteboard. The ability to call up the definitions of criteria and options in dialogue windows focuses the arguments. The social aspect of decision conferencing is closely linked to focused argument, which the analyst can record on-line, summarising the points of an argument. Seeing their comments transcribed on screen can help those with strong views understand that these

have been noted, allowing discussion to move on. Otherwise notes are recorded by someone specifically assigned to act as a recorder for the meeting, capturing decisions and discussions to be published later as meeting minutes. The weights place the scores in all the areas on a common scale. This allows fair comparison in terms of cost-benefit for all the options. Equity uses a double weighting system. The first expressing the weights associated with the benefit criteria (Within Criteria Weights) and the other the relative weights on a given criterion across the areas (Across Criteria Weighting). With the costs, scores and weights input, the model is ready to be sorted. The operation of the mathematical model within Equity uses a sequence of steps (Phillips 2004). Whilst the table looks similar to the previous figure, the data are now normalized input data, where previously they had been incremental data. Equity converts input scores to weighted preference values Vijk

V ijk ⫽ 10

w j w jk v ijk w j w jk

冘冘 j

k

where wjk is the within criteria weight - the weight of j on area k, and vijk is the input score for option i on criterion j in area k. Next Equity calculates the total preference value, vik for each level in each area (the sum across the criteria of the values Vijk for a given level in a given area).

V ik ⫽

冘V

ijk

j

Finally Equity calculates the benefit to cost ratios by dividing the differences in Vik values from one level to the next by cost difference.

r ik ⫽

Figure 3: Histogram display; an alternative view of the input data (Equity screen shot)

Military Operations Research, V13 N2 2008

V ik ⫺ V 共 i⫺1 兲 k C ik ⫺ C 共 i⫺1 兲 k

for i ⬎ 1

Sorting the options in a model is the main processing conducted by Equity 3. It is extremely fast running. This is essential to allow rapid what-if analysis, which underpins the ability to explore implications of contentious scores and weights.

Page 37

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

Figure 4: Weights Dialogue (Equity screen shot)

Each area can then be reviewed to see the weighted preference values set against cost as shown in Figure 6. This is a review point where the facilitator will look for the “furrowed brows” around the room suggesting that the result is not accepted by some participants. Such reactions suggest areas for further discussion. It also suggests those points of dispute which are likely to feature in the discussion of the overall program. Equity acts as a mirror for the views of the participants. It does not prove or even imply that these views are correct. The area diagram in Figure 6 illustrates the characteristic convex slope of options sorted into descending benefit-cost ratio order. The term “convex” has caused confusion in that what is convex viewed from one side is concave

from the other. Elevation views of the contours of terrain derived from a map refer to a slope with a rapid increase in height then levelling off to plateau as “convex”. This has become the usual way to describe the gradient of the triangle formed by the benefit and cost vectors for each option, which shows the benefit-to-cost ratio. Equity shows the triangles formed by costs and benefits, which assists the participants to understand the results. This is a good point to establish that the scores do reflect their true views of the relative merits of the options in an area. Having reviewed the weighted preference values in each area to ensure that they represent the views of the conference, the potential portfolios can be viewed. The shaded area of

Figure 5: Mathematics within Equity 3 (Equity screen shot)

Page 38

Military Operations Research, V13 N2 2008

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

Figure 6: Area Diagram

the envelope chart (Figure 7) shows all possible combinations of the options. The top edge of this area comprises the most benefit-cost advantageous options in descending order. This is the frontier and shows the order in which to prioritise the options in order to maximize the benefit obtained for the cost incurred. The envelope of possible options has a characteristic shape, similar to that of an American Football, or English Rugby football. Equity shows the triangles of benefit (Y-axis) against cost (X-axis) of options on the frontier that assists the participants to understand the display. In Figure 7 the X and Y vectors of one of the triangles have been emphasised for clarity. The frontier is formed by the sequence of these triangles. When combined with the cityscape, Equity allows a “walk” up the frontier so that participants can see each option’s place in turn. Seeing what options lie within and what are beyond a given level of funding generates further discussion and debate. Sensitivity analysis allows the group to see the implications on the overall result of individual changes to scores and weights argued for earlier. This interaction is the key to buy-in, especially by

Figure 7:

Envelope Graph

Military Operations Research, V13 N2 2008

those whose favourite options have fared badly in terms of benefit-cost. The resulting priorities are available as an order of priority listing showing all options sorted into benefit-to-cost ratio. This list is the tangible output.

INITIAL EFFORTS - DEVELOPMENT OF TECHNICAL CAPACITY As 2001 was the first occasion on which Decision Conferencing had been applied to the EP, the whole of the technical process was also developmental. During the summer applying the skills learned in training to the EP problem began in the various DECs. Equity addressed the EP problem of a selection of equipment lines better than Hiview so Equity models were used throughout. During the summer discussions took place on the application of the criteria by those trained at LSE. Directorate of the Equipment Plan (DEP) provided guidance on the criteria to be used. Attending other decision conferences assisted the analysts in preparing for their respective conferences. There was much parallel development of the individual models. These early conferences used five benefit criteria: Three were to assess the value to operational performance of the option and the other two scored the confidence or risk of the option and cost of ownership. The last was an attempt to capture the impact on the associated running costs for the option. These latter two have since been discontinued as benefit criteria. Although the use of the Decision Conferencing was optional 9 of the 11 DECs adopted

Page 39

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

it. Some DECs wished to consider only the options that were being proposed as part of the rebalancing of the programme, whilst others looked at their complete capability area in order to see if redirecting their main efforts might offer greater value.

Administration Having a suitable venue for the conference was essential. The LSE training took place in the Pod, a special facility for Decision Conferencing, with a large round table and built in projection equipment, whiteboards and good air conditioning. In July 2001 the ECC moved into temporary accommodation whilst the MoD Main Building was refurbished; it returned three years later in 2004. During the temporary accommodation period there was a shortage of suitable facilities for Decision Conferencing. Projection facilities and flexible layouts were the two key requirements for the room, with some means to control the environment. The rooms available to the ECC became overheated and required improvisation for projectors and laptops. Some of the early conferences were crammed into excessively hot rooms, which did not facilitate engagement by the participants. Of greater concern was a tendency to run the conference as a meeting and not to allow enough time for the debates, discussion and exploration on which buy-in is based. These early conferences were of great value in clarifying the challenges to putting the approach to work. These early experiences also highlighted the importance of everyone understanding their roles within the process. The facilitator assists individuals and groups to find solutions without imposing or dictating the content or product of the group debate. A facilitator is responsible for protecting the process; this differs from the role of the Chairman. A conference needs to be chaired ’tightly’ to ensure the process is consistent, but in a manner that ensures all participants’ views are given appropriate consideration. Facilitators must be allowed to run conferences with minimum intervention from the Chairman. Participants

Page 40

also need to have adequate advance information to allow worthwhile discussion to occur. Cost data were gathered as part of the model preparation from the relevant expert sources. During the conferences a typical approach was for the DEC desk officer responsible for the options in an area to brief the stakeholders on these. Scores were then derived from the group discussions for the relative benefit of the options in each of these areas for each of the criteria. Weights were then assessed based on what each area contained. Following sorting, the order of priority based on scores and weights was shown to the conference. Sensitivity analysis for issues not resolved during the scoring and weighting followed. The DEC then summarised his understanding of what the key equipment lines were. The stakeholders recognised both the potential of the approach as well as the uncertainties of some data and the potential dangers for the model to be misused. The conferences of 2001 were run in several locations. Those who ran conferences in Central London found the facilities often lacking. The meetings rooms in the MoD were not air conditioned, acoustically poor and the setup was less than ideal. Those who hired dedicated facilities, such as conference suites in hotels, found the meetings ran better and were more productive enabling participants to take full ownership of the decision; networking during the conference enhanced people’s understanding of each other’s point of view.

2002-2004 DEVELOPMENT OF DECISION CONFERENCING Following the success of 2001 the method was refined. In 2002 the five criteria were reduced to three. In 2003 costs were treated with greater detail and critically the concept of an Annualised Cost was introduced. This took account of both the capital costs and the operating costs and enabled a Defence view rather than just the capital cost of the equipment to be accounted for. It reduced the difficulty of scoring a high cost item with a long in-service life against a low cost item with a short life. Dividing the cost by the planned in-service life re-

Military Operations Research, V13 N2 2008

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

moved the need to mentally integrate benefit over the time in service when scoring options. This could be interpreted as analogous to the cost of renting and running the item in question for one year. This analogy improved the consistency of the comparison between options. Whilst 2001 had explored the two tier structure of DEC and JCB conferences, 2002 saw the first merged Equity model of all the DECs’ outputs presented to the JCB. This two tier process was repeated in 2003 and 2004. A key lesson from this period was ensuring that the scope of the DEC conferences covered enough options to provide real choice, particularly with regard to savings. Constraints on finances sometimes resulted in the order of priority becoming irrelevant because all of the proposed options had to be taken. Consequently and understandably, if unfairly, the efforts put into the MCDA process and the process itself were devalued. An external audit of the process by an independent expert from the University of Strathclyde was conducted each year to ensure that both the design and implementation of the Decision Conferencing process were valid.

Training The training changed from the broad foundation of 2001 to a focused use of Equity for the ECC, based on the design established during the first year. This allowed the 12 days to be compressed to 1 week. In 2002 12 new students attended at a room in a Ministry building in the centre of London. This exemplified all the aspects to avoid in a venue. The table was bolted to the floor and sat diagonally across the room. The provision of the training was split up amongst those of the 2001 group of students still in post supported by academia and industry. This resulted in no one trainer being present throughout the course, which was not helpful. Continuity was restored in 2003 and 2004, which also saw a change in venue for the training away from central London. This allowed focus on the Decision Conference to be maintained, whereas the 2001 and 2002 London locations invited distraction from other meetings or day to day business. A core group of trainers presented throughout both of these

Military Operations Research, V13 N2 2008

courses; this ensured coherence in the presentation of the material. The syllabus evolved to reflect the changes in the Equity software and was supported by a set of PowerPoint slides as a training aid. For 2002 an additional enhancement for the group was a set of aide memoire cards. These provided a quick reference after the training for the facilitator and analyst summarising technical and social techniques. The cards were amended throughout subsequent years.

Technical The biggest process change in this period was the method by which costs were viewed and the refinement of the benefit scores to just three that covered the spectrum of operations. Equity 3 was also introduced in 2003. The stability improved and the data usage became more assured and this added confidence for the users. Extra functionality was also included in the software that allowed for lower level models to be combined and assessed at a strategic level. Each single capability model becomes an area within the merged model. Over the years other tools have been discussed and evaluated, yet Equity is still seen to meet the needs of the MoD Decision Conference process and has remained the tool of choice.

Social Each wave of residential training established networks across the ECC for its participants. Additionally, the experience present within the MoD grew as people changed posts and spread knowledge of the process to new areas.

2005 - THE FALLOW YEAR In 2005 the introduction of Biennial Planning to the MoD led to the first break in the cycle of training and application of Equity based Decision Conferencing in support of the EP. The Decision Conferencing process was still seen to be of value for discussions of strategic direction and some areas used it in support of general capability management.

Page 41

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

2006 This was the first year for which biennial planning was practised. With the general pressures and the influence from the requirements to provide equipment to support current operational deployments then this planning round was different from any previous round. There were the normal mixture of savings and deferrals but also pressure to add new equipment items into the equipment programme in order to support the needs of the British Armed Forces deployed in Iraq and Afghanistan. Staff within the ECC are generally in post for 2–3 years so the change to biennial planning meant that in 2006 the number of staff with experience and first hand practise of undertaking MCDA and Decision Conferencing, using the Equity toolset, was less than a handful. The drain of knowledge was great and the lack of corporate memory was noticeable.

Technical Extensive retraining was required because of the limited number of people and so following on from the experience of previous training, an intensive 4-day course was delivered. As in

2003 and 2004 the training was delivered outside London so that the attendees could focus on the process of Decision Conferencing, without distraction from their normal work. The course style relied on classroom sessions reinforced by a number of practical sessions. At the end of the week the number of trained people who could act as analysts or facilitators had increased back to 20 people. By 2006 the Decision Conferencing process had evolved to a series of steps as described below (Figure 8) and the training took attendees through the process from beginning to end. The process is not linear; it covers all the steps required, but there can be some iteration around steps 2 and 3. The current MoD process step 3 is defined before step 2 and step 6, Weight Criteria, can only be done once the previous 5 steps have been completed. For MoD, Decision Conferencing is an integral part of the equipment planning process and follows on from Capability Audit and Balance of Investment studies, as shown in Figure 9. The practise from previous planning rounds continued. Individual capability areas assessed the proposed options against criteria covering the spectrum of conflict. The models

Figure 8: The 2006 Decision Conference Process

Page 42

Military Operations Research, V13 N2 2008

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

Figure 9:

The UK MoD Capability Management Process

were built into a merged model allowing the priorities across Defence to be viewed. The second application of Equity Decision Conferencing in 2006 was to the prioritisation

Military Operations Research, V13 N2 2008

of some of the projects making up the Research programme. A similar two tier approach was used with the DECs and their scientific advisers considering the proposed items of research.

Page 43

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

These were then merged into an overall model and prioritised. As with the EP process in 2001 the new structure was an aid to decision making but it was also a large scale pilot testing the approach for subsequent development. The criteria were more limited with, for the pilot, two criteria for benefit in the form of: • Capability Impact (the extent to which the requirement improves military capability, taking into account the possible wider exploitation possibilities) and • Probability of Success (POS) (the percentage likelihood that the research will deliver the desired outcome(s) in the required timescale). The latter was then converted to a penalty score, based on the methodology developed by Prof Larry Phillips of the LSE (Probabilities to Preferences), in order to measure the perceived risk with the individual items of research. Penalty was defined as:

100*log10

POS 100

The resulting score was brought onto the same benefit scale as the other benefits. It was then subtracted from the positive weighted preference values. The earlier place of research in the generation of military capability, compared to the components of equipment capability, made it more challenging to assess. A key topic was how to value the success of the research against the subsequent effect on capability.

Social Despite the facilities now available within the Ministry, a lesson was again that the use of appropriate facilities is vital to the success of the decision making process. For some conferences, the rooms became overcrowded and discussion was stifled because attendees were uncomfortable. In other meetings, the attendees were too spread out and this had the same effect. The action taken to counter the loss of skills ensured that the running of meetings was effective. Certain individuals became comfort-

Page 44

able with the requirements of facilitation and as a consequence supported several meetings. Peer support was also vital and discussion overcame many apparent difficulties.

CHALLENGES - PITFALLS TO AVOID IN USING VFT/MODA Development Designing a comprehensive approach is required. The first attempts to implement the Decision Conferencing process have had elements of piloting in both the Equipment Plan and the Research Requirements. Each year has found new or extended techniques and these have been incorporated in the subsequent years’ activities.

Technical The move from five to three criteria was the most major change to the criteria used by the EP conferences. The three that are still used currently are those which assess the value of options to different types of operation, covering a range from high intensity to peacetime activities, defined as the DIO, OWO and PA criteria. The other two criteria functioned differently generating confusion and delay. As with any modelling tool, the issue is often the availability and quality of the data. For Equity the options require sufficient definition to understand their potential operational benefits and costs. The cost data, especially regarding the through life costs, or costs of ownership proved difficult in that there was not a comprehensive source for these. Where data are incomplete a set of assumptions is necessary in order to discuss relative merits of options. Whilst such assumptions may not prove to be absolutely accurate they are necessary for reliable relative assessments to be made.

Social There remains opposition in some instances to the value focused thinking process. The introduction of Decision Conferencing was

Military Operations Research, V13 N2 2008

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

not welcomed by all. These people fall into two distinct schools. There are those that try to make the process work for them by attempting to distort the results to support vested interests. The Decision Conferencing process is effective at bringing out the reasons why individuals hold their views. Some proponents’ ideas, whilst logical and consistent, do not stand scrutiny. The hidden agenda and a resistance to the Decision Conferencing process are exposed by the debates of the provenance of scores and weights. This is a key strength of the process. The simplest means to frustrate this scrutiny is to attack the validity of the method. In this there is a common interest with those who lack faith in the process. The vested interests seek a “right answer” from Equity. This can be generated easily but has little utility. The lack of justification for the scores and weights used to create the pleasing image is quickly exposed. Those cynical of the whole process do not believe that it can provide a valid result. There was palpable resistance to the soft approach because it was soft. The implicit assumptions and judgements often present in harder approaches are less obvious. Indeed there has been a tendency to look to hard models to be more objective. It has been noticeable that the number of cynics has diminished since 2001. This augurs well as the ultimate buy-in is for those who will rely upon this type of Value Focused Thinking to inform their decisions. Turnover of staff, both scientific and military officers interrupt the continuity of staffing the Decision Conference programme. The annual cycle fitted better with the two year postings. However the growth of knowledge within MoD means that the process is accepted by many and is often proposed for other applications. In the early years providing enough laptops proved surprisingly difficult; these were essential to running the conferences. The decreased cost and ready availability of laptops has addressed this issue. The process over the years has also been refined. The development of pre-processing databases has automated much of the data transfer and this has been assisted by the improvements in the software. Whilst the methodology

Military Operations Research, V13 N2 2008

is the same the software is a different product from that used in the first conferences. There continues to be a number of software tools that can support this process. As with any analysis choice of the most appropriate modelling tool is important. This needs to be combined with the appropriate processes and interfaces with any pre- or post- processing applications. The tools need to be stable in order to provide confidence to the user and at the same time need to be able to be used by breadth of users. Not all analysts have the same aptitude for using a computer. The venue can be an aid to the Decision Conference or a challenge. The refurbished Main Building has many high quality conference rooms, although their close proximity to the desks does expose the desk officers to distraction. Projectors especially have become standard features of conference rooms, particularly in the refurbished Main Building, although the Boardroom layout of the rooms, in which the tables cannot be moved, is still not ideal. In the earlier years there were few such facilities available. A review of the benefit metrics needs to occur on each iteration of the process. The demands on equipment capability from current operations have come to the fore and it has been debated how this is best captured within the current process. It could become a formal criterion and in terms of Equity this implies that there it could have a higher weight. However this has to be considered against the fact the EP has a longer term future horizon than the operations the UK currently finds itself fighting in. Decision Conferencing reflects the priorities of the decision makers and those stakeholders participating in them. This can facilitate the decisions made at the time and subsequently the understanding of later scrutineers of those decisions.

LESSONS INDICATED On the positive side, the ECC experience has been that great buy-in can be attained from the Decision Conferencing process. The move into research requirements and capability in-

Page 45

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT

vestigations suggests how the process can be further exploited. Audit trails for the decisions made can be found in the models. This offsets the biggest challenge, which is that of continuity. Interruptions affect both people and the equipment used requiring refreshing of both on a regular basis. The ECC application of Equity based decision conferencing was as an aid to strategic discussion. There were occasional interests in trying to use the tool as an accounting tool which fortunately were redirected.

CONCLUSIONS The Decision Conference process worked for the EP in 2001-2006. It is a strategic tool reflecting the opinions expressed to it. Central direction provided coherence to the process so that the two tiered structure could function. Local implementation allowed local issues to be reflected. Criteria must reflect the key concerns in enough detail to allow discrimination between the options. A simpler set of criteria, considering the benefit offered by options to a well defined set of scenarios, provided more utility than the broader set initially used. Evidence (data) is key to informed and robust results. The process requires capacity in the form of trained facilitators and analysts, with software and hardware. The latter includes the quality and quantity of the venues used. The development of Decision Conferencing in the United Kingdom Ministry of Defence

Page 46

since 2000 provides the best evidence of its utility. It complies with the maxim “Good work begets more work.”

ACKNOWLEDGMENT © British Crown copyright - Dstl 2008 published with the permission of the Controller of Her Majesty’s Stationary Office.

REFERENCES Department for the Environment, Transport and the Regions (DETR), December 2000. “Multi-Criteria Analysis - A Manual,” HM Government, 2002. “The Treasury Green Book” Defence Acquisition Change Programme, Ministry of Defence, June 2006, “Enabling Acquisition Change.” Philips, L and Bana e Costa CA, 2005, “Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing, Working Paper LSEOR 05.75.” Phillips, L., May 2004, “The Mathematics of Hiview and Equity”. Phillips, L Prof, May 2006, “Probabilities to Preferences” More information on Equity 3 can be found at www.catalyze.co.uk Sources on Decision Conferencing are available via: http://www.lse.ac.uk/ collections/decisionConferencing/ Default.htm

Military Operations Research, V13 N2 2008

ABSTRACT

B

attlefield commanders using the Future Combat Systems (FCS) will have a plethora of information available that will come from many sources. A lot of information is difficult to organize and analyze, particularly in a time sensitive environment. Dynamic Decision Networks (DDNs) are a good approximation to dynamic programming, considered by some the “gold standard” in decision making optimization. They are an ideal aid for a decision maker faced with multiple uncertain variables and conflicting objectives and many opportunities to collect information, particularly when some of it may be imperfect. Thus they are a good alternative for military decision makers. DDNs are briefly described in this paper. Then, current research on the applications and utility of DDNs to the FCS are discussed. Specifically, the goal for this research is to build DDN decision aids for the leaders at all echelons within the FCS family of vehicles.

INTRODUCTION Given the tremendous amount of information available to commanders using Future Combat Systems (FCS), their dilemma will be to find as close to optimal decisions as possible throughout mission duration. As described in FCS Operational Requirements Documents, FCS commanders require cognitive decision aids to enable rapid decision-action cycles with little time and effort required to try to understand what is happening. A decision support aid should provide insights for what information is most valuable, how to collect this information, which decisions to make, and when to make the decisions. Most research efforts fail to adequately model decision problems having many variables such as those that exist in military decisions, particularly when the amount of uncertainty varies over time. This paper first briefly discusses the Army’s new FCS and why it is so important for a decision tool to be incorporated into it. Next, a new technique for solving decision problems called Dynamic Decision Networks (DDNs) is summarized. The paper then discusses how DDNs could be used to increase situational awareness in a FCS sce-

Military Operations Research, V13 N2 2008

nario. The DDN technique was developed by Buede (1999) and tested by Kobylski (2005). The scenarios described are the first use of the technique in military scenarios. Finally, the paper summarizes our ongoing research efforts on this topic.

BACKGROUND FCS will operate as a system of systems consisting of a family of networked air and ground based assets that will include manned and unmanned platforms (Schenk et. al., 2004). The most basic fighting organization under the Army’s FCS is the Brigade Combat Team (BCT). The intent is that the BCT provide units and soldiers with “unprecedented military capability for full spectrum operations” (Schenk et. al., 2004). Instead of waiting for the information to come from higher echelons, soldiers within the BCT will achieve situational awareness through direct collection and integration of intelligence (Schenk et. al., 2004). To do this, warfighters in the BCT will have many sensors that they did not previously have. The goal is that information from these organic as well as networked sensors from lower echelons and sister cells will be processed and integrated with information from higher headquarters resulting in information superiority. This information superiority will permit the BCT to make more timely and effective decisions to ultimately destroy enemy systems in its area of operations. A natural question is “Why does a commander need a decision tool?” There are two requirements stated in the FCS requirements document that address this question. The first states that the FCS command system must include automated decision aids to assist in developing situational understanding and facilitating target development. The second requirement states that the FCS must provide problem solving tools and decision aids for all BCT echelons. It also states that these decision tools and decision aids must support integrated battlespace management to include maneuver, lethal and non-lethal effects, sustainment, and mobility/countermobility. Other reasons for a decision tool for an FCS commander are recent test results from the field on some initial experimentation

The Use of Dynamic Decision Networks to Increase Situational Awareness in a Networked Battle Command Gerald C. Kobylski, Ph.D., P.E. United States Military Academy [email protected]

Dennis M. Buede, Ph.D. Innovative Decisions, Inc. [email protected]

John V. Farr, Ph.D., P.E. Stevens Institute of Technology [email protected]

Douglas Peters Applied Research Associates, Inc. [email protected]

APPLICATION AREA: Decision Analysis OR METHODOLOGY: Decision Analysis

Page 47

THE USE OF DYNAMIC DECISION NETWORKS

efforts for the preliminary FCS command and control (C2) system. This effort is first briefly discussed in the following paragraphs. The Army and its FCS contractors are on an extremely short timeline with an evolving baseline for the fielding of the FCS. Thus, numerous experiments on different systems and platforms within the FCS have already been conducted. One of these experiments involved C2 for the BCT. In October of 2000, the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Communications-Electronics Command (CECOM) teamed up to begin to define and then test a C2 system for a unit cell, the unit cell being the smallest indivisible unit in the BCT. To do this the program was designed in two efforts, the first of which was to design the functional requirements for the C2 system. The second effort was to produce prototype C2 software in order to explore C2 information and control needs in a notional FCS force. The goal of this program is to understand the required features and capabilities that must be available to a future force commander. In order to test the prototype software, a series of free-play Human-In-the-Loop discovery experiments were conducted. Two pictures of a notional C2 vehicle as modeled in the experiment are shown in Figure 1. The prototype equipped four warfighters with network capability inside the command vehicle (picture on the right). Each warfighter had their own display which consisted of two displays that could be independently configured according to personal preference and staff responsibility. A fifth display was placed in the front of all of the

Figure 1

Page 48

warfighters and usually showed the commander’s view of the battlefield but could be quickly displayed on any staff officer’s screen to facilitate collaboration within the cell. These four warfighters were the unit cell commander (overall command and control), the information manager (intelligence), the effects manager (non line of sight systems), and the battlespace manager (line of sight systems and future warriors). The picture on the left in Figure 1 depicts a notional view of how the setup would fit into a command vehicle. One of the findings from the C2 experiment was that operators complained of excess screen clutter on many occasions because of the amount of information available to them. A result of this screen clutter was that commanders became focused on the many reports and lost sight of the big picture. Another finding was that the commanders in the experiments frequently became consumed with the information from the many sensor reports. Like with the first finding, this resulted in the commander losing focus on the larger battlespace picture. A third finding from the C2 experiment concerned automatable decisions. An automatable decision was defined as when “all variables (are) known or can be calculated, the best option can be selected by tabulating scores - something a computer can do.” (Experiment 4 Phase 1 Analysis Report, pg. 41). At the conclusion of this experiment, it was determined and noted in the same report that 43 of the 173 decisions made by the commanders were automatable. Thus, if a decision tool were available to help recommend good alternatives for these deci-

FCS Command and Control Experimentation

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

sions, commanders or leaders could spend more time on those complex decisions that demand a clear understanding of the battlespace.

DYNAMIC DECISION NETWORKS In order to make the procedure for determining the solution of dynamic decision making problems computationally tractable and time efficient, Buede (1999) proposed DDNs. DDNs are designed to solve real time problems in a dynamic environment that have a nonconstant number of variables that evolve with time. A dynamic environment is defined as one in which there are numerous time periods when the decision maker makes decisions. These decisions not only affect uncertainties in the present but also uncertainty and decisions in the future. In dynamic decision making, the amount of uncertainty can change from period to period; in new periods some uncertainties may be resolved and some new ones may develop. A DDN is essentially an automated decision aid that recommends near-optimal alternatives based on past and present information. The DDN is not designed to model all types of problems, only those that are repetitive in nature. The reason for this will become evident later in the paper. A full description of DDNs can be found in Costa and Buede (2000) and also in Kobylski (2005). A short description of the methodology is explained in the following section. Since DDNs are based on Bayesian Networks (BNs) and Influence Diagrams (IDs), BNs and IDs are first summarized so that DDNs may be better understood. Because of its simplicity, the example selected to describe BNs, IDs, and DDNs is the weapon-level targeting problem. Most other decisions made in the FCS environment, particularly those made at higher echelons, will be more complex in nature.

Figure 2

Military Operations Research, V13 N2 2008

Bayesian Networks and Influence Diagrams BNs are graphical structures that are mathematical expressions of mental models of the world. These graphical structures form a network framework that model uncertainty by mapping cause-and-effect relationships among variables that are of interest to the decision maker. The purpose of Bayesian models for decision support systems is to give estimates of uncertainties for events whose outcomes are not available, or are observable at a high cost; these events are called hypothesis events (Jensen, 1996). When programmed into computers, BNs automatically update inferences based on new information even when key pieces of information are missing. Jensen (1996) defines several properties of a BN: 1) a BN consists of a set of variables and a set of directed edges between these variables; 2) each variable has a finite and mutually exclusive set of states; and 3) for each variable, there is an assigned conditional probability distribution with its parents. These conditional probabilities represent the strength of the dependencies between the variables and their parents. If there is no arc between nodes, then this implies conditional independence (Jensen, 1996). An example of the simplest BN that illustrates these properties is shown in Figure 2 utilizing the Netica software package by Norsys Software Corporation. There are two random variables of interest in this BN each with two finite and mutually exclusive states. Target 1 is a random variable because it is unknown whether the given target is friendly or enemy. Target activity 1 represents a report on whether the target is moving or stationary. Target 1 is not conditioned on any other outcomes so it does not have any parents. Its marginal distribution is

p共target 1 ⫽ friendly兲 ⫽ .4

A Simple Bayesian Network

Page 49

THE USE OF DYNAMIC DECISION NETWORKS

and

p共target 1 ⫽ enemy兲 ⫽ .6. As shown in Figure 2, the target’s activity is dependent on whether the target is friendly or enemy. The notional conditional probability distribution that we input into Netica is as follows:

p共target activity 1 ⫽ moving兩target 1 ⫽ friendly兲 ⫽ .6 p共target activity 1 ⫽ stationary兩target 1 ⫽ friendly兲 ⫽ .4 p共target activity 1 ⫽ moving兩target 1 ⫽ enemy兲 ⫽ .3 p共target activity 1 ⫽ stationary兩target 1 ⫽ enemy兲 ⫽ .7 Note that these conditional probabilities can only be seen when one is in the Netica program. Netica automatically computes the target activity’s marginal distribution and is shown in Figure 2 as p(target activity ⫽ moving) ⫽ .42 and p(target activity ⫽ stationary) ⫽ .58. The standard Bayesian problem is as follows: given some evidence, calculate the probability of various states of the hypothesis. In the example of Figure 2, the problem would be to calculate the probability that the target is friendly or enemy given the target’s activity, moving or stationary. In order to determine this, Bayes’ rule is utilized to compute the new

Figure 3

Page 50

conditional distribution of target 1 given the target’s activity. Figure 3 illustrates in Netica a slightly more complex BN that estimates from which target the greatest danger is coming given reports on the targets’ type (tank or wheel) and activity (moving or stationary). Findings or instantiated reports can be been entered in nodes as a 100% probability if they are know with certainty. A report is instantiated when its outcome is known prior to the decision being made and thus the information can be included when solving for the optimal decision alternative. One can then deduce which nodes or events have a higher probability of occurring. For example, if it is known that target 1 is stationary and a wheeled vehicle, based on the conditional probabilities that have been included in this model, the marginal probabilities in the Dangerous Mission node would reflect that the danger is most likely coming from target 1 as opposed to target 2. Earlier it was mentioned that the conditional probabilities that have been inputted into the model can only be seen when one is in the Netica program. Figure 4 below shows a Netica screenshot that illustrates the conditional probability table for the node called Dangerous Mission in Figure 3. For example, the probability that danger is coming from 1 is .02 (shown as 2.0%) given that Targets 1 and 2 are friendly. Like BNs, IDs have also proven immensely valuable in the applications of decision analysis. A useful discussion on IDs can be found in Howard and Matheson (1981), in Shachter (1986), and in Tatman and Shachter (1990). IDs share the same graph theory as BNs in that they

A More Complex Bayesian Network

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

Figure 4

Conditional Probability Table

have the same representation of a set of random variables and their joint distributions. The chance nodes adhere to the criteria discussed above with BNs; they represent random variables of interest that must have two or more distinct states. In addition to chance nodes, IDs contain two other types of nodes, decision nodes and value nodes (Shachter, 1990). The decision nodes contain alternatives that are comprehensive, distinct, and mutually exclusive courses of action available to the decision maker. If an arc

Figure 5

Military Operations Research, V13 N2 2008

enters a value node, then the outcomes or alternatives of the node from which the arc is emanating from have some value. The value is represented by some function contained in the value node. This value function allows the ID to compute the expected utility of the decision alternatives across all objectives. The objectives are usually in the form of uncertainty nodes where each outcome contains some value. Figure 5 illustrates an example of a simple ID again modeled in Netica. This figure models a decision whether or not to engage a target,

Simple Influence Diagram

Page 51

THE USE OF DYNAMIC DECISION NETWORKS

and if engaging, which target to engage. The decision node is the lightly shaded box. The value node is labeled “value” and is a function of whether or not the mission is accomplished; accomplish mission is the single objective in this scenario. A notional value of 200 is assigned for when the mission is accomplished and a value of zero when it is not accomplished. The numbers 200 and zero are arbitrary. The intent behind selecting these values is to show that there is no value to the decision maker when the mission is not accomplished and significantly more value if the mission is accomplished.a This assignment of values is not observable in Figure 5. To view these values one must be able to access the decision node in Netica. Mission accomplishment is dependent on which decision outcome is selected and from which target the danger is coming. The optimal decision in this example is either the first or the second alternative and has an overly precise expected value (EV) of 134.372. The reason the EVs of the alternatives are equal is because there is an equal probability of the danger being from target 1 or from target 2. When information becomes available as shown in a future example, one of the alternatives will have a higher EV. As discussed by Kjaerulff (1992), BNs and IDs are successful in modeling problems when the environments are static in nature; for those that are dynamic in nature like military scenarios, IDs and BNs are not very efficient. Howard et. al. (1998) discusses how one could connect IDs and BNs representing different stages with arcs or temporal relations, but that this would be very difficult to solve because of its computational intractability. Buede (1999) envisioned DDNs solving a specific class of problems that are defined by three criteria: 1) an evolving decision sequence where the same decision is made during every time slice;b 2) a large quantity of information arrives in every time slice and is relevant to the decision being made in the next time slice; and 3) an additional decision can be made on how or what information to collect prior to the decision for the next time slice. This information gathering decision affects the resolution of uncertainty in the next

Page 52

time slice, thus providing a dynamic link between time periods. The additional decision of what information to collect makes other decision tools even more cumbersome for these types of problems. Note that in our work with DDNs, we define the beginning of a time slice when new information arrives and a decision is made to either to do something or nothing. As will be seen later in the paper, the size of the time slice does not affect the efficiency and effectiveness of the DDN approach for laboratory research. However, in practice the size of time slice should occur with points in time when the decision maker can select a new alternative or when the decision maker has enough new information to consider selecting a new alternative. DDNs are an ideal aid for a decision maker faced with multiple uncertain variables and conflicting objectives and many opportunities to collect information. DDNs can easily be applied to the decision making process within the FCS to address various scenarios that change rapidly. Some examples of decisions modeled during our work are shown in Table 1. These examples are meant to demonstrate the repetitive nature of these decisions. A general model of a DDN for a military scenario adapted from Costa (1999) is shown in Figure 6. This model illustrates that the DDN consists of three separate nets (shown in boxes from top to bottom in Figure 6): the decision net, the data fusion net, and the inference net. The inference net is composed of a set of BNs that receive and analyze information from external sources and then make inferences. For this DDN, there is a BN in the inference net for every target that is in the area of operations. The purpose of each BN, which is identical and independent to the other BNs in the inference net, is to assess the level of threat associated with its target. As soon as a new target is detected, a standard BN for it is added in the inference net; this is one way where the DDN adapts to a changing number of variables in the decision problem. A second way is the variability in the number of uncertainty nodes in each BN. An uncertainty node in each BN represents information received from some report and is a function of the decisions on what information to collect in the prior time slice.

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

Table 1

Sample Decisions That Have Been Modeled Platform

Decision Modeled

Unmanned Ground System (UGS) UGS

How to monitor an area/ point of interest? How to check on a sensor that is giving no signal? Fire? Resupply?

Non Line of Sight Mounted Combat System

Unmanned Air Vehicle Reconnaissance Surveillance Vehicle (RSV)

Figure 6

Should one retask based on new information? How to maintain contact with the enemy?

With a sensor or sentry Leave alone, send a small security team, or a large security team Yes or no Resupply, Cross level, resupply and cross level, or continue with mission Divert, maintain mission, or recall With a UAV, a small UGS, a patrol, or an RSV

General Model of a DDN for a Military Scenario

Like the inference net, the fusion net consists of one or more BNs. This net fuses together all of the information from the BNs in the inference net and then passes the information onto the decision net. The decision net consists of one or more IDs which define the optimal decision for a given time slice. The number of alternatives in the decision node of the decision net’s ID is dependent on the number of BNs or targets.

Military Operations Research, V13 N2 2008

Alternatives

Figure 7 shows the first time slice for a more specific DDN modeled in the software package Netica. This model is exactly the same as the ID shown in Figure 5; the differences in the DDN methodology will be discussed shortly. The decision is again whether or not to engage two targets that are in the area of operations. As can be seen, there are two BNs in the inference net, one for each target. The fusion net consists of only one

Page 53

THE USE OF DYNAMIC DECISION NETWORKS

Figure 7

DDN First Time Slice

node, the dangerous mission node, because there are only two BNs in the inference net to fuse together information. The decision net consists of three nodes, the decision node, the accomplish mission node, and the value node. The accomplish mission node is the objective for this decision problem and contains value. Since one of the alternatives in the decision node is always do nothing, or in this case “Do not engage,” the DDN can recommend postponing the decision by making the do nothing alternative optimal. Thus, like IDs, the DDN not only recommends the optimal decision in every time slice, it also recommends when to make this decision. It can now be seen that in the DDN the BNs act as inference engines which model the uncertainties involved with different entities, while the IDs provide a mechanism to determine the optimal decision that maximizes the decision maker’s objectives (Costa, 1999). The end of a time slice for a DDN occurs when a decision has been made. Initially, the process for constructing new time slices for the DDN was done manually using available software. More recently, our team has developed software to implement the methodology. Following is a description of the methodology of the construction of the second time slice, given that the decision in the first time slice has al-

Page 54

ready been made. First, the DDN structure for t ⫽ 2 is created. This structure is an exact replica of the structure at t ⫽ 1 before any reports were instantiated and decision alternatives selected. Next, conditional arcs representing appropriate dependencies are modeled between the t ⫽ 1 and t ⫽ 2 structures. An example of these dependencies is shown in Figure 8. There are three arcs identifying dependencies between uncertainty nodes from the two time slices. For example, if in t ⫽ 1 target 1 is likely to be an enemy, then it is also likely that the same target will be an enemy in t ⫽ 2. The actual number of arcs between the time slices is dependent on the situation. The arc between decisions illustrates the order the decisions are made. The report vehicle type 1 in Figure 8 has been instantiated so that the reader can see how distributions change in the network. When this report is instantiated, the marginal distributions are different for most uncertainty nodes in t ⫽ 2 compared to their original marginal distributions in t ⫽ 1. Figure 8 illustrates how an ID models a decision problem with two time slices. As can be seen, the ID considers decisions and uncertainties from the future when determining optimal policies in the present. This adds to the ID’s inefficiency in solving dynamic decision problems. Note that the DDN

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

Figure 8

Forming the Second Time Slice before Absorption

did not consider decisions and uncertainties from the future when determining optimal policies in the present. The final step in the formation of the DDN’s second time slice is to absorb all nodes from t ⫽ 1, a process more fully described by Kobylski (2005) and by Shachter (1986). In probabilistic terms absorption is referred to as summing out the variables from the joint distribution. Summing out the variables in t ⫽ 1 occurs after all applicable reports have been instantiated and the decision maker has selected the optimal decision alternative. The absorption process removes all desired uncer-

Figure 9

Military Operations Research, V13 N2 2008

tainty nodes while maintaining the global relationship of the remaining uncertainty nodes. In this way, the structure in t ⫽ 2 maintains all of the information from t ⫽ 1. The resulting second time slice of the DDN is shown in Figure 9. The marginal probabilities in t ⫽ 2 now reflect the decision made and all of the information from t ⫽ 1. Note that all marginal probabilities are exactly the same in the uncertainty nodes as in those in t ⫽ 2 from Figure 8. Note that in the absence of any new information, a decision maker should engage target one because this alternative has the highest EV. Since this alternative has the highest EV,

The DDN’s Second Time Slice after Absorption

Page 55

THE USE OF DYNAMIC DECISION NETWORKS

the decision maker can expect that alternatives such as this will bring the most value over the course of many such decisions. Earlier it was pointed out that the size of the time slice does not affect the efficiency and effectiveness of the DDN approach. One can see that the length of time slice does not affect in any way the number of computations required. If a new target enters the operating picture, then a new model is created. Essentially, a BN representing this new target is added to the model. Likewise, if a current target leaves the operating picture, then a new model is created by deleting its BN from the model. The software that we are currently developing is able to efficiently add or delete BNs representing new targets or if targets leave the area of operations. The software also automatically updates all conditional probability distributions in the model to reflect the addition or subtraction of the BNs. Similarly, if in t ⫽ 1 the decision maker chooses to send out more or less sensors, then in t ⫽ 2 our software program will either add or delete reports in each BN based on this information gathering decision.

THE USEFULNESS OF DYNAMIC DECISION NETWORKS A standard technique in decision making optimization is Dynamic Programming (DP) and so it was selected as the technique to compare to DDNs. The essence of DP is to optimize the value of a decision sequence where the decision maker considers the future when making the decision in the present. As a result, “the best alternative in the short term is not necessarily the best in the long term” (Costa and Buede, 2000). This makes DP a good technique for analyzing sequential decisions. In sequential decision making problems, actions taken now may “preclude or initiate other opportunities or restrictions in the future;” thus, decision makers must analyze not only the consequences of present decisions, but also anticipate their implications for the future (Bunn, 1984). Unfortunately, DP is highly intractable and inefficient for even the simplest realistic combat decision problems and thus is not a realistic decision tool for use in military decision mak-

Page 56

ing. This is because DP examines all of the possible alternatives of the sequences of decisions resulting in a solution that becomes more complex as the number of alternatives and states increase. Other approaches to approximating DP solutions are reinforcement learning, and factored Markov Decision Processes (MDPs). Reinforcement learning (Sutton and Barto, 1998; Kaelbling et. al., 1996) is a “trial and error” approach to learning the optimal solution in a dynamic environment by matching situations to decisions and the associated signal for numerical reward. Much of reinforcement learning builds upon DP and related approaches such as MDPs. Our DDN approach does not address learning explicitly but will try to address learning some of the conditional probability tables from relevant data in the future. Factored MDPs (Guestrin et al., 2003) attempt to reduce the exponential growth of the state space of DP calculations but require an approach to problem formulation that is not compatible with the complex military environment we are addressing. Unlike DP, DDNs are very efficient for complex problems that involve multiple time slices and thus are a good alternative for military decision makers. The advantage of the DDN is that they can handle a very broad range of factors very quickly, which is what the tactical decision maker needs. Although the DDN incorporates previous information into its calculations, it does not attempt to model future time slices as DP does. Instead, the DDN only analyzes one time slice at a time. This analysis includes all information from previous time slices. With DDNs a decision maker would not disregard an optimal decision for the short term so that a better value could be obtained over the long term. Thus, DDNs work well for planning in the short term but do not work as well as DP when one must consider all of the decisions in the long term. Because DDNs only analyze one time slice at a time, if a decision maker receives a lot of new information or if there is a nonconstant number of time evolving variables, DDNs can rapidly update probability distributions and determine the optimal decisions. A major reason for this is that when new information arrives, the new computation involves

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

only the present time slice and not all of the future time slices as with DP. For example, assume a decision problem involves two time periods. In determining an optimal decision for the first time slice, DP would consider the decisions and uncertainties in both of the time periods. Although not as inefficient as DP, an ID would also consider the decisions and uncertainties in the both of the time periods when determining an optimal decision for the first time slice. The ID for a two time period scenario might look something like Figure 8 (Tatman and Shachter (1990)). DDNs are more computationally efficient than DP and IDs because they only consider one time slice at a time; each analysis includes all present information and information that has been absorbed from the past. Kobylski (2005) showed that DDNs frequently provide the same optimal policy as DP. To do so, a methodology was developed to calculate the probability of policy agreement between the DDN and DP for a given scenario. Probabilities of agreement were computed and are shown in Table 2 for scenarios involving two, three, four, and five time periods. A conclusion that can be made from Table 2 is that the more time periods considered, the more inaccurate the DDN’s approximation is to DP. However, one must realize that DP requires perfect information over all of the time slices to make an optimal decision and that

Table 2

Probability Computations

Time Slices

Probability of Agreement

2

0.92 0.996 0.88 0.98 1.0 0.66 0.86 1.0 1.0 0.66 0.78 0.92 0.99 1.0

3

4

5

Military Operations Research, V13 N2 2008

DDNs make the best decision based on all of the information available at the time. Since one will never have future information in the real world, this level of time-slice suboptimization is probably the best one is ever going to achieve in a combat environment unless one simulates future expected outcomes and includes those results as part of the decision consideration.

MODELING AN FCS SCENARIO WITH DYNAMIC DECISION NETWORKS In the current BCT design, there are many different types of leadership positions from the various echelons and units. There are also numerous operators that man different types of weapons systems. The leaders in these positions and operators of these systems make numerous decisions on a routine basis. In the first phase of our research effort we constructed 20 DDNs for the critical and time sensitive decisions made by the combat platoon leaders in the BCT and by the operators of the critical weapons systems. Some of these were shown in Table 1. The platoon leaders we focused on were in combat units within the BCT to include the mounted combat system and the infantry companies, the reconnaissance troop, and the mortar and non line of sight batteries. In the remaining part of this section a simple FCS scenario is modeled with a DDN. A first step in the modeling process for a decision problem is to identify the decisions and alternatives. The C2 experimentation cell identified several hundred decisions that were made during the experiments. These tactical decisions were made at the platoon command level and at the weapons operator level. In the following example shown in Figure 10, three decisions in a given time slice were selected. Recall that DDNs are designed for those decisions that are repetitive in nature. Also recall that within a particular time slice, the DDN can model a second decision, the decision on what information to collect or how to do it. The first decision selected in the example was whether or not to engage a target and if engaging, which target to engage given two

Page 57

THE USE OF DYNAMIC DECISION NETWORKS

Figure 10

DDN Modeling an FCS Scenario

targets. The second decision was what weapon system to fire, given that the decision to engage was made and that a line of sight (LOS) system is available. Non line of sight (NLOS) systems and mortars could have also easily be modeled. The two systems available for firing were the Smart Cargo and Multi-Purpose Extended Range Munition (MP-ERM) systems. And finally, the third decision for a given time slice was whether or not to deploy a Robotic Ground Surveillance Radar (Robo GSR). Another step in modeling decision problems is to determine the objectives. These objectives can be modeled as uncertainty nodes with the outcomes true or false, true being the objective has been obtained and false being the objective has not been obtained. The objective hierarchy that was developed for this scenario is shown in Figure

Figure 11

Page 58

11. The all encompassing objective is victory. This objective can be broken down into five subordinate objectives and are self-explanatory. In Figure 10 these five objectives are represented as uncertainty nodes shown just below the decision nodes; all have arcs that enter the value node. The weighting of these objectives is shown in Table 3. These weightings became the amount of value each objective contributes to the value node if the objective is met. Zero value was contributed when the objective was not met. Later in our research we modified the value model in Figure 11 with a more general value model shown in Figure 12. We are now able to use this new model for many different decisions and scenarios. This new model includes the major objective of “accomplish the mission”

Possible Objectives Hierarchy

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

Table 3

Weighting of Objectives

Accomplish Avoid Detect Sufficient Sufficient Mission Fratricide Ammo Recon Assets 1.5

1

1

.5

.5

from the early value model. The casualty objectives in the updated value model replace the avoid fratricide objective in the original model because they are more general and all encompassing. We realized in building the many DDNs in the previous year that there are many types of scarce resources that may be relevant for any specific decision. Also, we realized it was not appropriate to include resources in the objectives hierarchy because having resources is a means, not an ends. Conserving resources until they are most needed is an end however, so we added an objective called “opportunity costs” for the use of all scarce resources. Now the use of scarce resources is valued by the opportunity cost objective, tied to the appropriate decision nodes in the model, and tracked elsewhere in the DDN. Although the “detect” objective is applicable for certain decisions, it was removed in our general value model so that the general model would be applicable to many more decisions. After the decisions, alternatives, and objectives have been determined, the next step in the modeling process is to determine all of the important uncertain events and the outcomes associated with these events. These uncertainty events are modeled as uncertainty nodes with appropriate dependencies. For this scenario, in addition to the objective nodes defined earlier, there are several other uncertainty nodes shown in Figure 10. The top row of uncertainty nodes represents the availability of each of the two weapons systems and of the reconnaissance asset. The outcomes of these nodes represent the number of each on hand. If the decision maker decides to utilize one of the assets, the DDN will update the numbers automatically for future time slices. These quantities, as well as all conditional and marginal probabilities, are completely arbitrary.

Military Operations Research, V13 N2 2008

Since there are two targets that are in the operating picture, there are only two BNs modeled in the inference net of the DDN, one for each target. Because there are only two targets, it is only necessary to have one node in the fusion net. In this case the node is labeled Danger from Target. Within each BN there are six uncertainty nodes. The primary node in each BN that feeds information into the fusion net is the Danger from Target 1 (or 2) node. The other five uncertainty nodes in the BN model reports that might be available to the decision maker. The outcomes of these reports help determine whether the target is dangerous. Three of these reports are the same as discussed previously, whether the target is friendly or enemy, the target’s activity, and the target’s type. In the target type report, a third outcome has been added, personnel. Of the two new reports, the first report is the distance the target is to the decision maker. The outcomes for this uncertainty node were discretized into three ranges, zero to twenty km, zero to fifty km, and greater than 50 km. These distances are based on the ranges of the weapons systems that are being considered to fire. The second report that was added involves battle damage assessment (BDA). The outcomes of this report are mobility kill, firepower kill, kill, and no damage. Given the uncertainty nodes that have been modeled, the next modeling step is perhaps the most difficult. It is to establish all appropriate conditional relationships between the nodes in the model. The conditional relationships between the uncertainty nodes shown in Figure 10 are self-explanatory. For example, the uncertainty node called danger from target 1 is conditionally dependent on target 1 being destroyed; if target 1 has been destroyed, it is not likely that there is danger from target 1. Once these relationships have been established, marginal and conditional distributions must be elicited from the decision maker. All probabilities in this model are notional. The probabilities in the conditional probability tables can typically be derived from two domain sources: (1) technical domains (e.g., physics and engineering) such as the weather effects on the flight of a UAV or (2) the military domain of tactics and threats. The probabilities

Page 59

THE USE OF DYNAMIC DECISION NETWORKS

Figure 12

Refined Objectives Hierarchy

from the technical domains will be very stable. The probabilities from the military domain will have to be updated for each campaign and may need to be updated during a campaign as the threat and friendly forces update their tactics. For this military domain, we are working on an operational concept for supporting the military decision makers in updating these probabilities. We are also developing ideas with other colleagues for how to capture outcome data from the campaign as part of new C2 systems or situation reports and update these conditional probabilities automatically using learning algorithms. Alternatively, we could implement web-based probability elicitation sessions with those soldiers having the most current experience.

Figure 13

Page 60

As discussed previously, there are three subnets within the DDN: the inference net; the fusion net; and the decision net. In this example, the decision net includes all nodes above and including the value node. As mentioned, because there are only two targets in the model, only one node is necessary in the fusion net, the danger from target node. The inference net consists of all of the nodes below the danger from target node. An example of how the model changes when information is entered into the model is shown in Figure 13. The reports entered are that target 1 is armored and hiding and that there are 20 rounds available of the MP-ERM and Smart weapon systems. Compared to Figure 10, it is seen in Figure 13 that it is much

DDN with Information from Reports

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

more likely that the danger is coming from target 1; thus, the recommended alternative is to engage target 1. If the distance for target 1 is entered as zero to twelve km as shown in Figure 14, the recommended LOS weapon system is the MP-ERM weapon type; this has been selected in the figure. Also shown in Figure 14 is that there are two Robo GSR recon assets available and that the recon asset is not in the range of the enemy air defense artillery. Once all applicable reports have been entered and the optimal decisions have been selected, the DDN structure for t ⫽ 2 is created. This structure is an exact replica of t ⫽ 1 before any reports were instantiated with information and decision alternatives selected. If there are any dependencies between the uncertainty nodes from the different time slices, then conditional arcs between the time slices representing these dependencies are added to the model. After the decision maker has instantiated all applicable reports and selected the optimal decision alternative in t ⫽ 1, the construction of the next time slice is finalized. In order to do this, the first time slice is absorbed so that they are no longer visible in the model for the model in t ⫽ 2.

Figure 14

Military Operations Research, V13 N2 2008

Since DDNs are so new, prior to our recent work there was no software to implement the above methodology; to implement the above DDN methodology, the Netica software package was manipulated manually. The software program we developed automatically updates the DDN models as the decision maker progresses from time slice to time slice. The software program also updates models as the situation develops with more or less targets and reports. This updating includes adjusting the decision alternatives, the value structure, the number of nodes and arcs, and the conditional probability distributions represented by the arcs. Recall earlier in the paper it was mentioned that the DDN is not designed to model all types of problems, only those that are repetitive in nature. In the scenario of the above example, the repetitive decision involves the weapon-level targeting problem. The reason the decision must be repetitive in nature is so that all or as many as possible of the possible uncertainties, outcomes, and decision alternatives for the given scenario can be modeled (and later accessed) in the software. If the decision was not repetitive in nature, all of the possible uncertainties, outcomes, and decision alternatives

DDN with Additional Information from Reports

Page 61

THE USE OF DYNAMIC DECISION NETWORKS

could not be anticipated in the modeling phase.

FCS warfighter for using DDNs is that large amounts of information will become an asset rather than a nuisance.

CURRENT EFFORTS One of our efforts is to find the right set of decisions made within the FCS that can be modeled with DDNs. We are currently focused on developing DDNs for the Unmanned Air Vehicle (UAV) command and control structure. This effort is in collaboration with the Battle Command Applications Division, Unmanned C2 Systems Branch from Battle Command. Specifically, we are modeling autonomous decisions that can be made by the UAV such as whether to retask to a new mission or stay on task with the current mission. Another one of our efforts is to further refined the DDN software. The refinement will include facilitating user interaction. We are also focusing on integrating the prototype software for the DDN with the UAV software and with the FCS System of System Common Operating Environment (SOSCOE).

CONCLUSION As the technology used in solving decision problems has advanced, so has the sophistication of methods for solving these problems. Although much research has been devoted to developing methods to assist decision makers analyze decisions, few approaches have been proposed for complex dynamic decision problems such as those encountered in military scenarios. For most of the approaches that have been proposed, the determination of optimal solutions is computationally intractable for problems with just a few decisions and random variables. Kobylski (2005) showed that DDNs provide a very good approximation to DP, one of the most commonly used techniques in decision making optimization, particularly for those decision problems where there is a plethora of information and when the decision must be made quickly. Information from a commander’s sensor assets and from different levels of command within the FCS can be integrated (fused) by the DDN. The payoff to the

Page 62

ACKNOWLEDGEMENTS The authors are very appreciative for the support of Gary Sauer, the previous DARPA FCS C2 Program Manager. The results of the experiment were very insightful and reinforced to us the need for a decision tool.

DISCLAIMER The views expressed herein are those of the authors and do not purport to reflect the position of the United States Military Academy, the Department of the Army, or the Department of Defense. Approved for public release; distribution is unlimited. Case GOVT 07-7165, 11 December 2007.

NOTES a. Kobylski (2005) found that when determining the decision maker’s values for decision outcomes, the weighting of each outcome is what affects the optimal alternative. The actual numbers used has no impact. Thus it would not make a difference in determining the optimal alternative if the numbers 200 and 100 were used versus 2 and 1. b. Time slices and periods are used interchangeably throughout the paper.

REFERENCES Buede, D. M. 1999. “Dynamic Decision Networks: An Approach for Solving the Dual Control Problem,” Spring INFORMS, Cincinnati, OH. Bunn, D. W. Applied Decision Analysis. New York: McGraw-Hill Book Company, 1984, 181. Costa, P. C. G. 1999. The Fighter Aircraft’s Auto Defense Management Problem: A Dynamic Decision Network Approach. Master’s Thesis. Department of Systems

Military Operations Research, V13 N2 2008

THE USE OF DYNAMIC DECISION NETWORKS

Engineering and Operations Research, George Mason University, Washington, D. C., 30. Costa P. C. G., and D. M. Buede. 2000. Dynamic Decision Making: A Comparison of Approaches. Journal of Multi Criteria Decision Analysis. 9, 257. Guestrin, C., Koller, D., Parr, R., and S. Venkataraman, S. “Efficient Solution Algorithms for Factored MDPs,” Journal of Artificial Intelligence Research, Vol. 19, 2003: 399 – 468. Howard, Ronald A. and James E. Matheson. “Influence Diagrams,” in The Principles and Applications of Decision Analysis, Vol. 2, Strategic Decisions Group, 1981, 719 –762. Howard, Ronald A., Mark Paich, Jim Smith (GM Onstar), Marcy Conn, and Jim Smith (Duke University), Panel Discussion: “Downstream Decisions (Options) and Dynamic Modeling,” INFORMS Meeting, Seattle, WA: October 25–28, 1998. Future Combat Systems (FCS) Command and Control (C2), Experiment 4 Phase 1 Analysis Report. September 2003. Jensen, Finn V. An Introduction to Bayesian Networks, Springer, NY, 1996. Kaelbling, L.P., M. L. Littman, and A. W. Moore, “Reinforcement Learning: A Survey,” Journal of Artificial Intelligence Research, Vol. 4, 1996: 237–285. Kjaerulff, U. “A Computational Scheme For Reasoning in Dynamic Probabilistic Networks,” in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence, California: Morgan Kaufman Publishers, 1992, 121–129. Kobylski, G. C. 2005. Comparing Dynamic Decision Networks To Dynamic Programming. Ph.D. Dissertation. Departments of System Engineering and Mathematical Sciences, Stevens Institute of Technology, Hoboken, NJ, 42, 81, and 161. Netica [Computer Software]. Copyright 1990 – 98. Version 1.12. Vancouver, Canada: Norsys Software Corporation.

Military Operations Research, V13 N2 2008

Schenk, D. F., D. J. Bourgoine, and B. A. Smith. Unit of Action and Future Combat Systems-An Overview. Army Acquisition, Logistics & Technology (AL&T). Jan-Feb 2004, 4 – 6. Shachter, R. D. “An Ordered Examination of Influence Diagrams,” Networks, Vol. 20, 1990: 535–563. Shachter, R. D. “Evaluating Influence Diagrams,” Operations Research, Vol. 36, No. 4, November - December, 1986: 871– 882. Sutton, R. S. and A. G. Barto, Reinforcement Learning: An Introduction, The MIT Press, Cambridge, MA, 1998. Tatman, J. A. and R. D. Shacter. “Dynamic Programming and Influence Diagrams,” IEEE Transactions on Systems, Man, and Cybernetics, Vol. 20, March/April, 1990: 365–379. Glossary of Terms BCT BDA BFA BNs C2 CECOM DARPA DDN DP EV FCS IDs ISR LOS MP-ERM NLOS ORD RSV Robo SOSCOE UAV UGS

Brigade Combat Team Battle Damage Assessment Battlefield Functional Area Bayesian Networks Command and Control U.S. Army CommunicationsElectronics Command Defense Advanced Research Projects Agency Dynamic Decision Network Dynamic Programming Expected Value Future Combat Systems Influence Diagrams Intelligence, Surveillance, and Reconnaissance Line of Sight Multi-Purpose Extended Range Munition Non Line of Sight Operational Requirements Document Reconnaissance Surveillance Vehicle GSR Robotic Ground Surveillance Radar System of System Common Operating Picture Unmanned Air Vehicle Unmanned Ground System

Page 63

Operational Research in the RAF Air Ministry 218 pages, hard cover, text with some pictures/graphs Chapters: x Origins of Operational Research, 1918-1939 x Operational Research in Fighter Command, 1939-1943 x Direction of Operational Research and Parallel Developments in Other Services, 1940-1945 x Operational Research in Bomber Command, 1941-1945 x Operational Research in Coastal, Transport, and Flying Training Commands, 1941-1945 x Development of Operational Research Sections Overseas, 1942-1943 x Operational Research and the Allied Return to the Continent, 1943-1945 x Operational Research in the Mediterranean and Far Eastern Theatres of War, 1944-1945 x The Value of Operational Research in the War and Its Future Appendices x Scientists at the Operational Level (by Professor P.M.S. Blackett) x O.R.S. Reports (by Mr. H. Larnder, 1943) x The Sortie Raid Report Used by Bomber Command x The Examination of the Battlefield: Extract from the History of the Work of No. 2 O.R.S., Twenty-first Army Group, 1944-1945 Reprinted by MORS with permission of the Controller of Her Britannic Majesty’s Stationery Office 2003.

plus $5.00 freight per book (United States) or $10.00 freight per book (International)

COST $ 23.00

Please Send Me _____ copies of Operational Research in the RAF at $23 per copy plus $5 for freight ($10.00 International). Name _______________________________________________________________ Address_______________________________________________________________ _______________________________________________________________ City

__________________________________State__________Zip Code_______

† CASH

† CHECK

† Visa

† MC † AMEX

Credit card # ________________________________________________________ Billing Address: _____________________________________________________ __________________________________ Expiration date: __________________

VA Residents add 5% sales tax per copy.

#____ OR RAF x $23 =

$______

VA sales tax (5%)

$______

Freight

$______

TOTAL

$__________

Name on Credit Card (print): ___________________________________________ Phone Number: ______________________________________________________ Signature: __________________________________________________________

1703 N. Beauregard St x Suite 450 x Alexandria, VA 22311 x 703-933-9070 x Fax 703-933-9066 E-mail [email protected] 10/4/04

APPLYING VALUE-FOCUSED THINKING by Ralph L. Keeney Professor Keeney is a Research Professor of Decision Sciences at the Fuqua School of Business, Duke University. He has a B.S. in engineering from UCLA and a Ph.D. in operations research from MIT. Professor Keeney is the author of numerous published articles and several books on decision making. His recent book Smart Choices (Harvard Business School Press, 1999), coauthored with John Hammond and Howard Raiffa, is translated into seventeen languages. Professor Keeney received the Frank P. Ramsey medal for distinguished contributions in decision analysis and is a member of the National Academy of Engineering.

AVOIDING COMMON PITFALLS IN DECISION SUPPORT FRAMEWORKS FOR DEPARTMENT OF DEFENSE ANALYSES by Robin L. Dillon-Merrill, Gregory S. Parnell, Donald L. Buckshaw William R. Hensley, Jr., and David J. Caswell Dr. Robin Dillon-Merrill specializes in decision and risk analysis. The focus of her research is using programmatic risk analysis to improve operational and security management in complex, resource-constrained environments. She currently receives research funding from the National Science Foundation and NASA. She also serves as a risk expert on the National Academies Committee for the review of the New Orleans Regional Hurricane Protection Projects. Greg Parnell teaches decision analysis, systems engineering, and operations research at the United States Military Academy at West Point. He is also a Senior Principal in Innovative Decisions Inc. His research and consulting involve decision and risk analysis for military, intelligence and homeland security applications. He is currently Past President of the INFORMS Decision Analysis Society. He also is a Past President of MORS and a Fellow of the Society.

Military Operations Research, V13 N2 2008

Don Buckshaw is a Senior Decision Analyst with Innovative Decisions Inc. He has 18 years experience as an intelligence officer and operations research analyst. His expertise is the formulation and modeling of complex national defense and intelligence community problems that are both ill-defined and data deficient. One day he hopes to find some data. Bill Hensley is a Senior Associate with The Kenjya Group, Inc. Bill recently retired from the United States Army within over 20 years of experience as a Field Artillery Officer and as an Army Operations Research, Systems Analyst (ORSA). His expertise is in providing decision analysis support for strategic planning and programming to leadership and senior management with recommendations, formulation and modeling of complex national defense and intelligence community issues facing leaders at all levels. Bill is currently serving as a principle leadership consultant within the intelligence community. David Caswell is a Military Operations Analyst for the United States Air Force. He has 5 years of experience providing analysis for military decision makers. His most recent area of research has been in the modeling and decision support of intelligence community issues.

About our Authors

BRINGING VALUE FOCUSED THINKING TO BEAR ON EQUIPMENT PROCUREMENT by Jennifer Austin and Ian M Mitchell Jennifer Austin joined Defence Evaluation Research Agency (now known as the Defence Science and Technology Laboratory (Dstl)) in 1998 having obtained a degree in Computer Science having previously had a successful career as a Sales Manager. Since working in Operational Research she has, until 2005, mainly focussed on Logistics Analysis when she took up a post in the UK Ministry of Defence Head Office. Here, one of her responsibilities was running Decision Conferencing within the Equipment Capability Customer. Since returning to Dstl in 2007 Jennifer has become responsible for coordinating the Ground Manoeuvre research programme. Ian Mitchell has worked in Operational Research (OR) since 1988, following a flir-

Page 65

ABOUT OUR AUTHORS

tation with accountancy. For the Centre for Operational Research and Defence Analysis (CORDA) he initially produced historical data compilations. Studies of the land battle followed until 1992. After two years as an independent OR consultant to the UK Department of Social Security and European Space Agency he joined the Defence Research Agency at Fort Halstead in 1994. He managed the Battle Group War Game, and led infantry studies. He moved to Porton Down in 1998 managing OR studies until 2000 when he was seconded as the OR specialist for the Directorate of Equipment Capability, Nuclear Biological and Chemical (DEC (NBC)). As of 2004 he has supported studies of broader considerations of Chemical Biological Radiological and Nuclear (CBRN) defence. Since 2005 he has worked on naval OR studies. Ian served on the Council of the UK OR Society from 1994 to 2000 and 2002 onwards, being Vice-President from 2003 to 2005 and Treasurer as of 2007. He was commissioned into the Territorial Army in 1984 and was introduced to OR as part of a Business Studies degree during 1986.

THE USE OF DYNAMIC DECISION NETWORKS TO INCREASE SITUATIONAL AWARENESS IN A NETWORKED BATTLE COMMAND by Gerald C. Kobylski, Dennis M. Buede, John V. Farr, and Douglas Peters Gerald Kobylski is an Academy and Associate Professor in the Department of Mathematical Sciences at West Point. He earned a B.S. in Engineering from West Point in 1988, an M.B.A. from Western New England College in 1991, an M.S. in Operations Research from the Georgia Institute of Technology in 1997, an M.A. in National Security and Strategic Studies from the Naval War College in 2002, and a Ph.D. from the Stevens Institute of Technology in 2005. He is a registered Professional Engineer in the state of Virginia. He is currently the program director for MA103, Mathematical Modeling and Introduction to Calculus, the first course taken by most plebes at West Point.

Page 66

Dr. Dennis Buede is Executive Vice President of Innovative Decision, Inc., a consulting firm specializing in the use of decision and risk analysis techniques to the fields of systems engineering, intelligence analysis, and strategic planning. He has been a professor of systems engineering at George Mason University and Stevens Institute of Technology. His Ph.D. is in Engineering-Economic Systems from Stanford where he learned that communicating the results is the toughest part of any project. He has relearned this lesson many times since. John V. Farr is a Professor of Systems Engineering and Engineering Management and Associate Dean for Academics in the School of Systems and Enterprises at Stevens Institute of Technology. Before coming to Stevens in 2000, he was a Professor of Engineering Management at the United States Military Academy at West Point. He is a former past president and Fellow of American Society for Engineering Management and the American Society of Civil Engineers and a member of the Army Science Board and Air Force Studies Board of the National Academics. His is a former editor of the Journal of Management in Engineering and the founder of the Engineering Management Practice Periodical. He has authored over 100 technical publications including one textbook. Douglas J. Peters is a Principal Engineer with Applied Research Associates (ARA). As the leader of ARA’s Command and Control Concepts group, he led experimental design and analysis efforts for several DARPA programs. On the FCS C2 and MDC2 programs, Mr. Peters led portions of the analysis as well as ARA’s efforts to collect, query, and visualize experimental data. On the DARPA Real-Time Adversarial Intelligence and Decision-making (RAID) project, Mr. Peters was responsible for experimental design, data collection, information elicitation, and analysis. Also at ARA, Mr. Peters has led studies on Hard Target Uncertainties and the tunnels defeat portion of the Integrated Munitions Effects Assessment, both sponsored by DTRA. Mr. Peters is interested in modeling engineering phenomena and experimental metrics. Mr. Peters has degrees from The Pennsylvania State University and North Carolina State University. He is a registered Professional Engineer in the state of North Carolina.

Military Operations Research, V13 N2 2008

EDITORIAL POLICY The title of our journal is Military Operations Research. We are interested in publishing articles that describe operations research (OR) methodologies used in important military applications. We specifically invite papers that are significant military OR applications. Of particular interest are papers that present case studies showing innovative OR applications, apply OR to major policy issues, introduce interesting new problem areas, highlight educational issues, and document the history of military OR. Papers should be readable with a level of mathematics appropriate for a master’s program in OR. All submissions must include a statement of the major contribution. For applications articles, authors are requested to submit a letter to the editor⫺exerpts to be published with the paper⫺from a senior decision-maker (government or industry) stating the benefits received from the analysis described in the paper. To facilitate the review process, authors are requested to categorize their articles by application area and OR method, as described in Table 1. Additional categories may be added. (We use the MORS working groups as our applications areas and our list of methodologies are those typically taught in most OR graduate programs.)

Editorial Policy and Submission of Papers

INSTRUCTIONS TO MILITARY OPERATIONS RESEARCH AUTHORS The purpose of the “instructions to Military Operations Research authors” is to expedite the review and publication process. If you have any questions, please contact Ms. Corrina Witkowski, MORS Communications Manager (email: [email protected]). General Authors should submit their manuscripts (3 copies) to: Dr. Richard F. Deckro Military Operations Research Society 1703 N. Beauregard St, Suite 450 Alexandria, VA 22311-1717 Alternatively, manuscripts may be submitted electronically in Microsoft Word or Adobe Acrobat by emailing the manuscript and associated materials to [email protected] AND to [email protected]. Per the editorial policy, please provide: • • • •

authors statement of contribution (briefly describe the major contribution of the article) letter from senior decision-maker (application articles only) military OR application area(s) OR methodology (ies)

Approval of Release All submissions must be unclassified and be accompanied by release statements where appropriate. By submitting a paper for review, an author certifies that the manuscript has been cleared for publication, is not copyrighted, has not been accepted for publication in any other publication, and is not under review elsewhere. All authors will be required to sign a copyright agreement with MORS.

Military Operations Research, V13 N2 2008

Page 67

EDITORIAL POLICY AND SUBMISSION OF PAPERS

Abbreviations and Acronyms Abbreviations and acronyms (A&A) must be identified at their first appearance in the text. The abbreviation or acronym should follow in parentheses the first appearance of the full name. To help the general reader, authors should minimize their use of acronyms. If required, a list of acronyms can be included as an appendix.

TABLE 1: APPLICATION AREAS & OR METHODS

APPLICATION AREA (by Composite Group)

I. STRATEGIC & DEFENSE
Strategic Operations
Nuclear Biological Chemical Defense
Arms Control & Proliferation
Air & Missile Defense

II. SPACE/C4ISR
Operational Contribution of Space Systems
Battle Management/Command and Control
ISR and Intelligence Analysis
Information Operations/Information Warfare
Countermeasures
Military Environmental Factors

III. JOINT WARFARE
Unmanned Systems
Land & Expeditionary Warfare
Littoral Warfare/Regional Sea Control
Strike Warfare
Air Combat Analysis & Combat ID
Special Operations and Irregular Warfare
Joint Campaign Analysis

IV. RESOURCES
Mobility & Transport of Forces
Logistics, Reliability, & Maintainability
Manpower & Personnel

V. READINESS & TRAINING
Readiness
Analytical Support to Training
Casualty Estimation and Force Health Protection

VI. ACQUISITION
Measures of Merit
Test & Evaluation
Analysis of Alternatives
Cost Analysis
Decision Analysis

VII. ADVANCES IN MILITARY OR
Modeling, Simulation, & Wargaming
Homeland Defense and Civil Support
Computing Advances in Military OR
Warfighter Performance and Social Science Methods
Warfighting Experimentation

OR METHODOLOGY

Deterministic Operations Research
Dynamic Programming
Inventory
Linear Programming
Multiobjective Optimization
Network Methods
Nonlinear Programming

Probabilistic Operations Research
Decision Analysis
Markov Processes
Reliability
Simulation
Stochastic Processes
Queuing Theory

Applied Statistics
Categorical Data Analysis
Forecasting/Time Series
Multivariate Analysis
Neural Networks
Nonparametric Statistics
Pattern Recognition
Response Surface Methodology

Others
Advanced Computing
Advanced Distributed Systems (DIS)
Cost Analysis
Wargaming

Length of Papers
Submissions will normally range from 10–30 pages (double spaced, 12 pitch, including illustrations). Exceptions will be made for applications articles submitted with a senior decision-maker letter signed by the Secretary of Defense.


Format
The following format will be used for dividing the paper into sections and subsections:

TITLE OF SECTIONS
The major sections of the paper will be capitalized and in bold type.

Title of Subsections
If required, major sections may be divided into subsections. Each subsection title will be in bold type and in Title Case.

Title of Subsection of a Subsection
If required, subsections may be further divided. Each title will be in Title Case; bold type will not be used.

Electronic Paper Submission with Figures, Graphs and Charts
After the article is accepted for publication, an electronic version of the manuscript must be submitted in Microsoft Word or Acrobat. For each figure, graph, and chart, please include a camera-ready copy on a separate page. The figures, graphs, and tables should be of sufficient size for the reproduced letters and numbers to be legible. Each illustration must have a caption and a number.

Mathematical and Symbolic Expressions
Authors should put mathematical and symbolic expressions in Microsoft Word or Acrobat equations. Lengthy expressions should be avoided.

Footnotes
We do not use footnotes. Parenthetical material may be incorporated into a notes section at the end of the text, before the acknowledgment and references sections. Notes are designated by a superscript letter at the end of the sentence.

Acknowledgments
If used, this section will appear before the references.

References
References should be cited with the authors and year; for example, one of the first operations research texts published with several good military examples (Morse & Kimball, 1951). References should appear at the end of the paper, unnumbered and listed in alphabetical order by the name of the first author. Please use the following formats:

For journal references, give the author, year of publication, title, journal name, volume, number, and pages; for example:
Harvey, R.G., Bauer, K.W., and Litko, J.R. 1996. Constrained System Optimization and Capability Based Analysis, Military Operations Research, Vol 2, No 4, 5–19.

For book references, give the author, year of publication, title, publisher, and pages; for example:
Morse, P.M., and G.E. Kimball. 1951. Methods of Operations Research. John Wiley, 44–65.

For references to working papers or dissertations, cite the author, title, type of document, department, university, and location; for example:
Rosenwein, M. 1986. Design and Application of Solution Methodologies to Optimize Problems in Transportation Logistics. Ph.D. Dissertation. Department of Decision Sciences, University of Pennsylvania, Philadelphia.

Appendices
If used, this section will appear after the references.


MILITARY OPERATIONS RESEARCH SOCIETY BOARD OF DIRECTORS

OFFICERS

President John F. Keane* Johns Hopkins University/APL [email protected]

Vice President Professional Affairs (PA) Dr. Niki C. (Deliman) Goerger United States Military Academy [email protected]

President Elect LTC Michael J. Kwinn, Jr.* United States Military Academy [email protected]

Secretary William M. Kroshl Johns Hopkins University/APL [email protected]

Vice President Finance and Management (F&M) Joseph C. Bonnet* The Joint Staff [email protected]

Immediate Past President Patrick J. McKenna* USSTRATCOM/J821 [email protected]

Vice President Meeting Operations (MO) Kirk A. Michealson* Lockheed Martin [email protected]

Executive Vice President Brian D. Engler* Military Operations Research Society [email protected]

* - Denotes Executive Council Member

OTHER DIRECTORS
Dr. Patrick D. Allen, General Dynamics, UK Ltd.
Col John M. Andrew, USAFA/DFMS
Lt Col Andrew P. Armacost, HQ AFSPACE/A9
Dr. Michael P. Bailey, MCCDC
Col Suzanne M. Beers, AFOTEC Det 4/CC
Dr. Theodore Bennett, Jr., Naval Oceanographic Office
Mary T. Bonnet, AD, Innovative Management Concepts, Inc.
Dr. W. Forrest Crain, AD, Director Capabilities Integration, Prioritization and Analysis
William H. Dunn, AD, Alion Science and Technology
Helaine G. Elderkin, FS, AD, Computer Sciences Corporation
Dr. Karsten Engelmann, Center for Army Analysis
John R. Ferguson, AD, SAIC
Dr. Mark A. Gallagher, HQ USAF/A9R
Michael W. Garrambone, General Dynamics
Debra R. Hall, General Dynamics
LTC Clark H. Heidelbaugh, Joint Staff, J7
Robert C. Holcomb, Institute for Defense Analyses
Timothy W. Hope, AD, Whitney Bradley & Brown, Inc.
Dr. John R. Hummel, Argonne National Lab
Gregory T. Hutto, 53rd TMG/OA

Maj. KiraBeth Therrien, SAF/US
Gregory A. Keethler, AD, Lockheed Martin
Jane Gatewood Krolewski, USAMSAA
Dr. Lee Lehmkuhl, MITRE
Trena Covington Lilly, Johns Hopkins University/APL
Dr. Andrew G. Loerch, AD, George Mason University
Dr. Daniel T. Maxwell, Innovative Decisions
Dr. Willie J. McFadden II, AD, Booz Allen Hamilton
Lana E. McGlynn, AD, McGlynn Consulting Group
Dr. Gregory A. McIntyre, Applied Research Associates, Inc.
Terrence McKearney, The Ranger Group
Anne M. Patenaude, DUSD(R)/RTPP
Dr. Steven E. Pilnick, Naval Postgraduate School
Mark D. Reid, AD, MITRE
Cortez D. (Steve) Stephens, AD, MCCDC
Donald H. Timian, Army Test and Evaluation Command
Eugene P. Visco, FS, AD, Consultant
Corinne C. Wallshein, OSD/PA&E
COL Richard I. Wiles, USA-ret, FS, AD

MILITARY OPERATIONS RESEARCH SOCIETY
1703 N. BEAUREGARD STREET, SUITE 450
ALEXANDRIA, VA 22311
Telephone: (703) 933-9070