‘Au service de l’analyse’ — since 1998

Vol. 15 | no 4 | 2013

The Network Industries Quarterly

Critical Infrastructure Protection in the Network Industries

ISSN 1662-6176

Recent disasters show that failures in key infrastructures may result in severe economic, environmental, and social losses, even human casualties. It is therefore no surprise that governments worldwide are increasingly concerned about the robustness and resilience of national and regional infrastructures to natural disasters, operational accidents and other disruptive events. The mounting political attention has led to numerous national and regional initiatives that aim to identify, designate, and protect the critical infrastructures that underpin our daily life. This issue of the Network Industries Quarterly is dedicated to the theory and practice of critical infrastructure protection (CIP). The issue’s articles show that the network industries produce many products and services that are vital for modern societies, and that these industries are vulnerable to disruptions. I hope you will enjoy reading this theme issue and find the articles interesting and thought-provoking.

Toni Männistö, Guest Editor, EPFL Chair MIR

Contents

3 The Role of Asset Management in Reducing the Risk of Catastrophic Infrastructure Failure Richard G. Little

8 Organizational Capabilities as Critical Factor for Infrastructure Service Provision

Hagen Worch, Mundia Kabinga, Anton Eberhard, Jochen Markard, and Bernhard Truffer

12 How can space-based infrastructure and assets contribute to the monitoring of global threats?

Pierre-Alain Schieb, Claire Jolly, and Barrie Stevens

16 The Postal and Courier Services as Critical Infrastructure Toni Männistö

20 What are the pre-requisites for managing a critical infrastructure such as Air Traffic Management?

Marc Baumgartner, Valerie November, and Anthony Smoker

26 Why do we still have major accidents? – Lessons learnt from the chemical industry Richard Gowland

29 Establishing the National Critical Infrastructure Inventory in the Context of the Swiss Critical Infrastructure Protection Programme Stefan Brem

33 Conferences
36 Announcements

P.S.: If you are interested in contributing to one of the forthcoming issues, please send an email to:

Network Industries Quarterly | Published four times a year, contains information about postal, telecommunications, energy, water, transportation and network industries in general. It provides original analysis, information and opinions on current issues. The editor establishes caps, headings, sub-headings, introductory abstract and inserts in articles. He also edits the articles. Opinions are the sole responsibility of the author(s).

Subscription | The subscription is free. Please do register at to be alerted upon publication.

Letters | We do publish letters from readers. Please include a full postal address and a reference to the article under discussion. The letter will be published along with the name of the author and country of residence. Send your letter (maximum 450 words) to the editor-in-chief. Letters may be edited.

Publication directors | Matthias Finger, Rolf Künneke
Guest Editor | Toni Männistö
Web-manager and Design | Mohamad Razaghi
Founding editor | Matthias Finger
Publishers | Chair MIR, Matthias Finger, director, EPFL CDM, Building Odyssea, Station 5, CH-1015 Lausanne, Switzerland (phone: +41.21.693.00.02; fax: +41.21.693.00.80; email: ; web-site: )

ISSN 1662-6176 | Published in Switzerland

The picture on the front page was taken from Wikipedia, licensed under the Creative Commons Attribution 2.5 Generic license.

Dossier

The Role of Asset Management in Reducing the Risk of Catastrophic Infrastructure Failure

Richard G. Little*

Abstract
Age and poor condition evidently play a role when critical infrastructure fails with catastrophic consequences. This article discusses how principles of asset management can help make aging infrastructure safer.

Whenever a major piece of critical infrastructure fails, usually spectacularly with loss of life and high economic costs, the question is always raised whether excessive age and poor condition were to blame. Age and condition certainly played a role in the failure of the New Orleans levees in 2005, the San Bruno, California natural gas pipeline explosion in 2010, and a spate of U.S. bridge collapses in the 1980s. Not surprisingly, in the aftermath of such incidents, calls for increased expenditures to “restore the infrastructure” are widely heard. However, is it really as simple as that? Even if it is indeed true that these failures resulted from the poor condition of the infrastructure, was the condition directly related to age, and if so, what can we do to materially reduce the risk of failure? These and similar questions have occupied the attention of infrastructure managers for many years, and despite millions spent on research and billions spent on maintenance and renewal, failures continue to occur. This article suggests that good asset management practices, coupled with the political will and institutional capacity to implement them broadly, can reduce the risk of failure in major infrastructure systems.

Typically, following a major failure, an independent investigative panel is convened that delivers a lengthy and detailed report on the causes of the event and proposes a range of solutions to prevent the problem from recurring in the future. However, failures continue to occur, and if not a copy of the previous event, they are sufficiently similar to suggest that the root causes have not been addressed. The major shortcoming of this approach is that it treats failure as an isolated event rather than a systemic problem. As a result, after a failure, a long list of fixes, usually aimed at “getting the engineering right”, are bolted on to systems whose underlying flaws remain largely uncorrected.

There are many reasons for this, but a major factor appears to be the assumption that “well designed” systems are inherently safe and that causes of failure must be extraordinary anomalies that lie outside the system itself and cannot be controlled. For example, infrastructure systems are designed to survive a broad array of hazards such as earthquakes, extreme winds, floods, snow and ice, volcanic activity, landslides, tsunamis, and wildfires, as well as terrorism and sabotage. The design paradigm for addressing these threats generally focuses on their first-order effects: that is, designing the physical systems not to fail under normal loads and operating conditions and to withstand extreme loads caused by natural hazards and malevolent acts. While valid up to a point, this approach does have shortcomings. It implicitly assumes that the “maximum probable event” to which the infrastructure will be subjected is known and that its effects can be predicted with some degree of accuracy. Experience has shown that this is often not the case. In addition, little if any direct consideration is given to the threat posed by aging materials, inadequate maintenance, and excessively prolonged service lives. These factors constitute a threat in their own right that can be as detrimental to the performance of infrastructure as a natural hazard event but are more insidious because they are passive and evolve slowly over time. When otherwise adequately designed systems have been weakened by excessive age or inadequate maintenance, they also become vulnerable to otherwise survivable events.

The Risk of Infrastructure Failure

Risk is a useful analytical concept for putting these various threats in context. It is typically expressed as the probability of an adverse event multiplied by the consequences should that event occur, or R = P × C. The nature and

* Visiting Research Scholar, Department of Industrial and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY. [email protected]. This article was adapted from “Managing the Risk of Aging of Public Infrastructures”, presented at the IRGC expert workshop on Public Sector Governance of Emerging Risks, September 17-18, 2012, in Lausanne, Switzerland.


magnitude of the risk can be assessed by means of three questions (Kaplan and Garrick 1981):

1. What can go wrong? Infrastructure failures can range from the merely annoying (a brief power outage that requires resetting digital clocks) to the decidedly catastrophic (the failures of the New Orleans levees during Hurricane Katrina that resulted in the deaths of more than 1,000 people and billions of dollars in damage). Fortunately, most infrastructure failures are clustered at the lower end of the consequence scale, but notable exceptions such as New Orleans do occur and need to be minimized to the extent possible.

2. What is the likelihood that it could go wrong? Unlike light bulbs or electric motors, infrastructure systems do not follow straightforward models where the mean time to failure can be determined and the corresponding probability of failure calculated with a reasonable degree of accuracy. Deterioration models of the physical systems (electric grid, pipeline networks, roadway pavements), based on age, materials, environmental conditions, degree of maintenance, etc., have been developed and are a useful aid to understanding the physical behavior of these systems and under what conditions failure is more (or less) likely to occur. However, they still have far to go in predicting with confidence the actual probability of failure. Despite this, expert judgment and other subjective methods can be used to develop reasonable estimates of failure probabilities.

3. What are the consequences of failure? Even though hurricanes may be “acts of God” and not preventable, almost all of the destruction and death that occurred in New Orleans in 2005 was caused by the failure of old and poorly maintained levees, not directly by the hurricane itself. We can only speculate how much better prepared New Orleans might have been if, instead of assurances that all was well, a widely disseminated flood risk assessment had read: “In the event of a stronger than usual but not uncommon intensity hurricane, it is highly likely that the levees will be breached or otherwise fail in a number of locations, with the result that hundreds to thousands of mostly poor people will perish and damage in the billions of dollars will accrue.”

The answers to these questions provide some insight into what can be done in the context of governance and decision-making to reduce the risks from aging and poorly maintained infrastructure.
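To make the R = P × C framing concrete, here is a minimal sketch in Python; every probability and consequence figure in it is invented for illustration and is not taken from this article:

    # Illustrative only: hypothetical annual failure probabilities (P) and
    # consequences in USD (C), ranked by risk R = P * C.
    threats = {
        "brief power outage":          (0.50, 1e5),
        "bridge scour collapse":       (0.005, 5e8),
        "levee breach in a hurricane": (0.02, 1e10),
    }
    for name, (p, c) in sorted(threats.items(), key=lambda kv: -kv[1][0] * kv[1][1]):
        print(f"{name:30s} R = {p * c:>15,.0f} USD/year")

Ranked this way, the rare, high-consequence event dominates the everyday nuisance, which is precisely where the article locates the risk worth governing.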

How governments respond to this risk will vary, but at the very least actions should include the development of comprehensive and robust policies that will guide investment management decisions based on system vulnerability (including age) and aimed at reducing the risk of devastating systemic failures. The empirical evidence certainly suggests that the risk of devastating systemic failures must be placed on governmental agendas; market forces alone will not be sufficient to drive individual system operators to take action of their own accord. Regulation, because of its inflexibility, should be used sparingly but remains an option, particularly when serious injury and loss of life are potential consequences of failure.

The Role of Asset Management in Risk Reduction

One of the primary objectives of infrastructure asset management is to maximize the effective life of the system and its components within the constraints of available resources. If asset management practices are superimposed on the risk assessment exercise above, we can see that they can play a meaningful role in reducing the risk of failure due to deterioration by reducing the modes of failure (“What can go wrong?”), the failure probability (“What is the likelihood that it could go wrong?”) and the consequences of failure. A good program of infrastructure asset management can beneficially influence all three elements of the risk equation.

As it is an underlying premise of this discussion that infrastructure age has the potential to increase the likelihood of failure and, consequently, the risk of such failure, it will be instructive to spend a few moments considering the validity of that assumption. In and of itself, the age of a given piece of infrastructure does not appear to be the primary driver in determining the risk of infrastructure failure—it is neither necessary nor sufficient for failure to occur. As a case in point, the United States experienced three significant bridge collapses in the 1980s: the I-95 Mianus River Bridge in Connecticut, the I-90 Schoharie Creek Bridge in New York, and the US-51 Hatchie River Bridge in Tennessee. Two of the bridges had been in place for less than 30 years, and the Hatchie River Bridge was 54 years old. By contrast, the Brooklyn Bridge (1883), George Washington Bridge (1931), and Golden Gate Bridge (1937) are still in service today. What appears to be the more significant risk factor (at least in the cases of these three collapsed U.S. bridges) is the lack of adequate and timely inspection, maintenance, and repair (NTSB 1984, 1988, 1990). This is a key point for improving our understanding of the risk of infrastructure failure and for the adoption of better informed ex ante policies, guidelines, and regulations to reduce that risk.


Figure 1 illustrates how the life of an infrastructure asset can be prolonged through timely and appropriate maintenance, and also shows that the consequences of inadequate maintenance are premature aging and the loss of value resulting from a decreased service life as well as an increased potential for failure. However, the search for an optimal asset management investment strategy to capture this value has been elusive, and it remains something of a Holy Grail to the infrastructure management community, and rightly so. Each year, the equivalent of tens of billions of dollars is spent globally on maintenance and repair activities in an effort to maintain satisfactory performance levels for these systems. Public agencies and private corporations alike grapple with the question of how much they should spend to maintain their infrastructure assets against the possibility of a serious breakdown or loss of service capacity while, at the same time, wondering if they are spending too much. The desire is, of course, to avoid spending more than necessary while also avoiding calamitous outcomes (e.g., lengthy road or bridge closures, catastrophic failure, etc.).

This challenge is illustrated conceptually in Figure 2, where it can be seen that the optimal asset management strategy seeks to position the vertical line in the decision table so that the risk of both Type I errors (not doing needed maintenance) and Type II errors (doing excessive maintenance) is balanced within the decision-makers’ tolerance either for an increased number of failures or for higher expenditures on unneeded maintenance. There is no single right answer here: the level of investment will be strongly influenced by the decision-makers’ appetite for risk!
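The trade-off in Figure 2 can also be written down as a toy optimisation. In the sketch below, the failure-probability curve and all cost figures are assumptions made for illustration: spending too little leaves a large expected failure loss (Type I), spending too much wastes budget (Type II), and the minimum of the total sits in between.

    import math

    C_FAIL = 500.0  # assumed consequence of a failure, in millions

    def p_fail(spend):
        # Assumed curve: annual failure probability falls as maintenance spend rises.
        return 0.2 * math.exp(-spend / 10.0)

    def expected_cost(spend):
        # Total expected annual cost: maintenance spend plus expected failure loss.
        return spend + p_fail(spend) * C_FAIL

    cost, spend = min((expected_cost(m), m) for m in range(0, 101))
    print(f"cost-minimising spend: {spend} M/yr (expected total cost {cost:.1f} M/yr)")

A more risk-averse decision-maker would weight the failure term more heavily and spend beyond this minimum; as the text notes, the “right” level of investment is a matter of risk appetite, not a universal constant.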

Figure 1. The Role of Maintenance and Repair in Extending the Service Life of Infrastructure and Reducing the Risk of Failure (NRC 1993)

Modern management culture rewards efficiency and speed of operation. Thus, there is little market incentive to adopt safety practices that may take longer and cost more, and which are difficult to measure. On the other hand, while we usually know whether a system has failed or not, we

rarely if ever know how closely and how frequently it approaches a failure point. As a result, organizations and the bodies that regulate them tend to assume a level of safety that may or may not exist. When failure does not occur, these assumptions are reinforced, with the result that safety margins are often reduced on no other basis than that the system has not failed. The incentive of measurable financial benefits from reduced safety precautions (inspections, testing, maintenance, etc.) weighed against an unmeasurable (or at least unmeasured) level of safety usually drives decision-making. When this behavior becomes ingrained in organizational culture, it is very difficult to implement alternative courses of action.
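This asymmetry — measurable savings weighed against an unmeasured level of safety — has a simple statistical face. By the textbook “rule of three” (a standard result, not taken from this article), n failure-free periods only bound the per-period failure probability at roughly 3/n with 95% confidence:

    # After n failure-free years, the 95% upper confidence bound on the
    # annual failure probability is approximately 3/n (the "rule of three").
    for n in (10, 30, 100):
        print(f"{n:3d} failure-free years: p(failure) may still be ~{3 / n:.0%} per year")

An unbroken safe record is therefore weak evidence for the generous safety margins that organizations tend to infer from it.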

Figure 2. Asset Management Strategies Must Balance Risk and Cost

The major lesson that should be taken from this is that complex infrastructure systems are not inherently safe, no matter how well designed. The reason for this is that the systems are designed first and foremost to produce a service, be it electric power or flood defense, at a reasonable cost; not to be safe on their own account. Events such as New Orleans are far from unique, and the recurrence of similar institutional and human factors as underlying root causes in other failures suggests that a new paradigm for addressing the risks of high-consequence infrastructure failures is called for. Rather than seeking an optimal design solution based on an expected maximum probable demand or the return period of a natural hazard event, a more holistic way of addressing these risks may be to assume that a failed condition is actually the stable configuration of the system. If entropy is thought to govern system behavior, then continuous inputs of financial and intellectual capital will be required to keep the system in an unstable, lower-entropy regime where “safe” exists. By recasting the problem as one of proactively achieving


safety rather than defensively preventing failure, such investments take on a wholly different meaning and can no longer be viewed as optional. Without on-going analysis, assessment, planning, testing, maintenance, and repair, the system will revert to its most stable configuration, i.e., failure.

A Way Forward

It is obvious from the preceding discussion that just knowing what to do about the risk of aging infrastructure is not enough. There must also be the political will to act and the institutional frameworks and organizational capacity to develop and implement appropriate policies, i.e., “how to do it.” Recognition by governing bodies of a problem of national significance that needs to be addressed is imperative. Stakeholder groups must also “buy in” to the approach. For example, in the Netherlands, government has managed an existential risk of flooding in an acceptable manner during the 60 years since the floods of 1953, and people generally believe that government will continue to be up to the task. On the other hand, the relatively low awareness of risk and unwillingness to address it at a national level in the U.S. may be a result of both geography and governance. The U.S. is very large, and it is very difficult to identify risks that affect all parts of the nation on a similar basis. Governance in the U.S. is based on a federal, not a national, system. Policy and decision-making are mostly devolved to the 50 states, which have widely differing risks, priorities, and means to address them. Much of the responsibility for actually funding infrastructure improvements is further delegated to local governments. Perhaps under such conditions, the reactive posture of the U.S. towards hazards and risk is not surprising. However, we can begin to draw up a list of actions that could form the basis for “how to” create an environment conducive to better asset management and overall risk reduction.

• Make risk management an enterprise goal for governments and infrastructure agencies. The adoption of foundational documents such as ISO 31000 would provide a basis for sustained action.

• Adopt and promulgate infrastructure risk reduction as core values through all levels of the responsible organization. The DuPont Corporation has for years held safety on an equal footing with profitability, and no one is exempt. Cultures can change.

• Develop broad stakeholder support for risk acceptance and collective action through meetings and dialogue at all governmental levels. The benefits of risk management activities must be understood if they are to be supported by the public.

• Hold management accountable for organizational risk performance; good performance should be rewarded and poor performance corrected.


• Develop the necessary funding sources and financing strategies for asset management and risk reduction. Water boards in the Netherlands fund flood defense mostly with locally generated taxes and fees. Local solutions are possible.

• Continue to expand our understanding of how infrastructure age and condition affect its performance and risk of failure. Advances in sensor technology and the processing of massive amounts of data offer many opportunities to improve asset management and reduce risk.

There should be no expectation of a universal solution to this problem. What is possible in a small country like the Netherlands, which faces a well-recognized threat from the sea, is quite different from what can occur in the much larger and broadly diverse United States. A strong government in Singapore can compel national actions unthinkable in the UK. However, despite the challenges of addressing a global issue at the local level, there are many lessons to be learned from what we know of good risk and asset management practices that can reduce the risk of catastrophic failure.

References

Kaplan, S. and B. J. Garrick. 1981. “On the Quantitative Assessment of Risk.” Risk Analysis 1(1): 11–27.

National Research Council. 1993. The Fourth Dimension in Building: Strategies for Avoiding Obsolescence. Washington, D.C.: National Academy Press.

National Transportation Safety Board. 1984. Collapse of a Suspended Span of Route 95 Highway Bridge over the Mianus River, Greenwich, Connecticut (HAR-84/03). Washington, D.C.

National Transportation Safety Board. 1988. Collapse of New York Thruway (I-90) Bridge, Schoharie Creek, near Amsterdam, New York (HAR-88/02). Washington, D.C.

National Transportation Safety Board. 1990. Collapse of the Northbound U.S. Route 51 Bridge Spans over the Hatchie River near Covington, Tennessee (HAR-90/01). Washington, D.C.

About the author

Richard G. Little is a private consultant in infrastructure policy and a Visiting Research Scholar in the Department of Industrial and Systems Engineering at Rensselaer Polytechnic Institute, working on issues of disaster preparedness and community resilience. He was formerly Director of the Keston Institute for Public Finance and Infrastructure Policy in the Price School of Public Policy at the University of Southern California. Prior to joining


USC, he was Director of the Board on Infrastructure and the Constructed Environment of the National Research Council (NRC), where he directed a program of studies in building and infrastructure research. Mr. Little has over forty years’ experience in planning, management, and policy development relating to civil infrastructure, including fifteen years with local government. His comments and positions on infrastructure and public finance issues appear regularly in the New York Times, Wall Street Journal, and Financial Times. He has been certified by examination by the American Institute of Certified Planners and was elected to the National Academy of Construction in 2008. He received an M.S. in Urban-Environmental Studies from Rensselaer Polytechnic Institute.


Dossier

Organizational Capabilities as Critical Factor for Infrastructure Service Provision

Hagen Worch*, Mundia Kabinga**, Anton Eberhard**, Jochen Markard***, and Bernhard Truffer****

Abstract
This research examines the underlying reasons for performance deficiencies in electricity utilities from a capability perspective. The findings indicate that sector reform processes affect the capability structures of utilities considerably and that the loss of capabilities results in performance deficiencies in power supply. Policy makers therefore need to attend more carefully to the capability structures in infrastructure sectors when implementing new policies and regulations.

I. Background

Providing reliable, secure, cost-efficient and environmentally sustainable electricity services is a central challenge in most economies worldwide. Electricity sector reform processes, including liberalization and privatization, have been initiated in many countries to tackle these challenges. However, numerous infrastructure sector reform processes are incomplete, have been implemented much more slowly than expected, have experienced resistance from sector players, or have even been reversed. The underlying reasons for these drawbacks are not entirely clear. This situation calls for a more detailed look into the reform processes and requires a potential revision of the conceptual frameworks that energy policy decision-makers and scholars are working with. Conventional explanations of failures in electricity supply focus on the sector’s regulatory framework and whether it provides sufficient incentives for investments in power plant capacities, adequate generation technologies and the infrastructure networks. We argue that organizational capabilities are another critical factor for infrastructure service provision, which – surprisingly – has received little systematic attention from policy makers and scholars alike. Our findings suggest that electricity sector reforms and regulatory changes lead – under certain circumstances – to a loss of critical competences at the utility firm level, which not only worsens planning, building, operation and maintenance procedures but also makes swift reactions to new challenges and crisis situations at the sector level difficult. More generally, we argue that sector reforms and regulatory changes can have a far-reaching impact on the organizational capabilities

of utilities. Once these capabilities are lost, they may be very hard to regain. With this ‘capability perspective’ we complement traditional theoretical explanations of utility and sector performance.

II. What is a Capability Perspective?

Organizational capabilities have been widely studied in the management literature to better understand the performance of firms, especially in situations where tasks are highly complex or market environments are changing rapidly. Organizational capabilities enable a firm to execute tasks such as production, marketing and product development. They develop over time and depend, among other things, on the competences, skills and experiences of the employees of a firm. Ideally, the organizational capabilities of a firm are well adapted to the key tasks it has to perform. When tasks change (e.g., due to changes in the market environment) or the capability structure changes (e.g., due to well-functioning teams leaving the firm), a capability gap may occur and negatively affect firm performance. Both effects can occur simultaneously or independently. A capability gap is an insufficient availability of competences, skills and experiences for a specific organizational task. A major challenge in such a situation is that capability gaps are not always easy to identify and often take quite some time to be resolved, if at all. Capability gaps tend to be persistent in situations in which the adaptation of existing capabilities or the development of new capabilities is time-consuming, complex and poorly understood. This is particularly the case if the lost capabilities comprise long-term experience and tacit knowledge.
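As a minimal sketch of why such gaps persist, consider a toy model (all numbers invented for illustration; this is not an empirical claim from the study) in which a reform removes part of the experience stock and adds new tasks at the same time, while experience can only be rebuilt slowly:

    # Toy model: capability stock vs. task requirements (arbitrary units).
    capability = 100.0
    required = 100.0
    BUILD_RATE = 4.0          # assumed: experience regained per year (slow, tacit)

    capability *= 0.6         # reform year: 40% of experienced staff leave ...
    required *= 1.2           # ... while liberalisation adds new tasks

    years = 0
    while capability < required:
        capability += BUILD_RATE
        years += 1
    print(f"capability gap persists for ~{years} years")

The point is the time constant: when rebuilding experience is slow relative to the shock, the performance deficit outlasts the reform that caused it.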

* Swiss Distance University of Applied Sciences / Fernfachhochschule Schweiz, Institute for Management and Innovation; [email protected]
** University of Cape Town Graduate School of Business, Management Programme in Infrastructure Reform & Regulation; [email protected]
*** Swiss Federal Institute of Technology Zurich, Department of Management, Technology and Economics, Chair of Sustainability and Technology, Weinbergstrasse; [email protected]
**** Eawag – Swiss Federal Institute of Aquatic Science and Technology, ESS – Environmental Social Sciences; [email protected]
This article is based on an empirical study presented in the research paper “Why the Lights Went Out: A Capability Perspective on the Unintended Consequences of Sector Reform Processes”.


Figure 1: Capability perspective on utility and sector performance

In the case of electricity utilities, key tasks include planning, building, operating, and maintaining power plants and the network infrastructure. Applying a capability perspective to the analysis of electricity sectors enables us to explicitly link changes in the regulatory environment to the emergence of capability gaps in utility firms. Unbundling is an example in which regulation directly affects the organizational structures of electricity utilities, with potentially negative consequences for the existing capabilities of the affected organizational units. Market liberalization is an example where regulatory changes lead to the emergence of new tasks such as marketing, power trading and balance group management, for which new organizational capabilities are required. The result of a capability gap is a performance decline or even the complete failure of specific organizational tasks, and eventually of the organization as such. If existing capabilities are weakened or lost because of regulatory changes, this may result in a decline of organizational performance. Regulatory interventions can also change the tasks utilities have to fulfill. If tasks change, new capabilities will be required, which means that they have to be developed in order to fulfill the tasks. A decline in organizational performance may in turn have repercussions at the sector level, especially if all utility firms are affected in a similar way or if very critical firms, such as single suppliers, are affected. Figure 1 depicts the capability perspective on utility and sector performance.

III. The Emergence of Capability Gaps: The Case of Sector Reforms in South Africa

Between 2005 and 2008, South Africa experienced a series of major electricity blackouts with serious implications for residential and industrial electricity customers and the economy as a whole. Eskom, the national electricity supplier, had to launch emergency measures such as scheduled load-shedding, and the government set up a task force, new regulations and ad hoc energy-saving programs. Despite these interventions, the country still suffers from a poorly performing power sector, with the grid and power plants working at their limits and a high risk of power outages due to a critically tight reserve margin.

The electricity crisis in South Africa is commonly explained by insufficient generation capacity, badly maintained power plants, insufficient coal quality and a weak electricity grid. But how did such a situation occur? How could a 40% reserve margin for power generation in 1991 turn into an estimated capacity shortfall of 10% in 2008? Why were existing power plants in such bad shape? How could a supposedly experienced company like Eskom buy below-specification coal and allow coal stocks to fall to unacceptable levels? And why does it take years to ameliorate the situation despite early interventions by the government and management?

Our empirical findings suggest that a central underlying reason for the inadequate operation and maintenance of Eskom’s power plants was a dramatic lack of competences and skills. We identified several factors that either caused the loss of highly experienced engineering, technical and managerial capabilities, or significantly changed the nature of the tasks Eskom had to perform. Three public policy reform programs had particularly severe unintended effects on Eskom’s capability structure. They caused the loss of capabilities due to long-term personnel leaving the utility and due to the change of tasks that required new capabilities that were not readily available.

The first major reform process of South Africa’s electricity sector we identified was the commercialization of Eskom. The aim was to establish a more commercially professionalized organization with adequate management and control structures. The commercialization process had, through various channels, a substantial impact on Eskom’s organizational capabilities as highly experienced personnel left the organization. Long-term experience in planning, building, operating and maintaining power plants diminished. As a result, operators, operation managers and maintenance managers were not able to run the plants adequately.

The second major sector reform was the government’s implementation of a new regulatory framework, which was intended to open the wholesale market to more private investment, with the corresponding government decision that prohibited Eskom from building new generation capacity.
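The swing from surplus to shortfall is less mysterious than it sounds once steady demand growth meets a frozen build programme. A back-of-envelope check (a deliberate simplification that assumes constant demand growth and flat capacity):

    # Implied average demand growth if capacity stays flat while the
    # capacity/peak-demand ratio falls from 1.40 (1991) to 0.90 (2008).
    margin_1991, margin_2008, years = 1.40, 0.90, 2008 - 1991
    growth = (margin_1991 / margin_2008) ** (1 / years) - 1
    print(f"implied demand growth: {growth:.1%} per year")   # ~2.6%

Roughly 2.6% annual demand growth — modest and entirely foreseeable — is enough to turn a 40% surplus into a 10% shortfall over those seventeen years.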


The change of the regulatory framework can be characterized as an attempt to partly liberalize the electricity sector. The already critical situation in South Africa’s electricity sector was further exacerbated. Specifically, the government decision to stop Eskom from building new capacity changed the nature of operating and maintaining power plants considerably, because with increasing electricity demand the reserve margins declined tremendously. As an unintended consequence, the plants had to be run harder, which in turn generated additional requirements for competences and skills that were already scarce. This increased the capability gap even more.

Finally, socio-political transformation processes, which included policies to ensure and enforce equal opportunity employment practices in South Africa, further amplified the high employee turnover caused by the sector reform processes. The way the employment equity and affirmative action policies were implemented exacerbated an already perilous problem within Eskom around operational experience. In fact, accelerated retirement and resignation of key staff affected Eskom’s capabilities critically. As the power stations already had a limited cadre of experienced operators, operations managers and maintenance managers due to high employee losses in the commercialization process, the remaining staff was partly replaced with relatively less experienced staff from previously disadvantaged backgrounds. Consequently, the available experience to perform the organizational tasks diminished further and critical competences and skills were lost. The subsequent promotion of young and relatively inexperienced operators and managers eroded the level of experience available to perform the operations and maintenance tasks and ensured that it remained persistently low over time. It is important to highlight that it was less the engineering qualifications that were lost than engineering experience. Regarding qualifications, the incoming young cohort of engineers was highly qualified and brought state-of-the-art engineering and management knowledge into Eskom.

Summing up, the different public policy reforms caused a substantial loss of competences and skills across all organizational functions within Eskom. These were the ultimate reasons why the blackouts occurred and why they could not be remedied within an acceptable time span. Certain positions within the power stations’ operations and maintenance departments could not be filled anymore, and as a consequence the power plant staff was increasingly unable to comprehend what was happening across the integrated generation processes and failed to strictly adhere to the specific plant operation and maintenance procedures – particularly under tightened reserve margins.

In addition to the immediate impact on the capability structure, the loss of experienced staff had long-lasting consequences. Many positions remained vacant because the acquisition and build-up of professional experience takes time in this sector. Succession programs have to be planned and implemented over considerable time spans. Thus, the long-term experience in operating and maintaining power

plants, which the ex-staff had accumulated, could be replaced neither by young professionals nor by externally hired experts in the short term. Furthermore, little emphasis was placed on programs to maintain capabilities at the beginning of the reform processes. This was because changes in the policy and regulatory environment had hardly been considered by policy makers as having a relevant influence on the utility’s capability structure and performance. More generally, the trade-off between the necessity of sector reforms and transformational policies on the one hand, and the impact on the utilities’ capability structure on the other hand, has received very limited attention.

As a consequence, it became increasingly difficult for Eskom to balance demand and supply. Between 2005 and 2008, South Africa experienced several blackouts and significant load-shedding. Moreover, Eskom was unable to keep its existing power plants working adequately. The electricity crisis caused costly damage to the economy and a substantial loss of welfare for electricity consumers. For example, there was an extensive period of power rationing, and even mines were forced to close for certain periods. Although the situation has stabilized since 2008, South Africa’s electricity system is still critical, with electricity supply substantially strained in peak hours and an ongoing risk of power outages.

IV. Policy Implications

From these insights, we draw two conclusions. First, the capability dimension is an important but neglected and little understood dimension of infrastructure sector regulation, in both research and practice. Second, and as a consequence, there is little knowledge about the specific challenges that emerge at the organizational level due to sector reforms and regulation, and about how these affect the performance of utilities. Our results are therefore a strong call for more research to gain a better understanding of the detailed causes, mechanisms and implications of infrastructure sector reform processes for capabilities, and to examine more thoroughly the specificities and emerging time lags of these processes.

A capability perspective on the performance of utilities and electricity sectors enables us to better explain unintended consequences of utility sector reforms – with power outages being one of the most severe consequences. The analysis of the causes that led to Eskom’s poor performance in securing electricity supply and the associated power crisis in South Africa exemplifies how the complex interactions of various factors influence a firm’s capability structure and, in turn, determine its performance. Electricity sector reforms and other public policies may affect the capability structure of utility firms in a fundamental and sometimes even irreversible way. In fact, regulatory changes can have a direct impact on the structure of organizational capabilities. A typical case in the electricity sector is the unbundling of power generation and network operations. Regulatory changes can also have an indirect effect on organizational capabilities due to


changes in the tasks and requirements a utility firm has to fulfill. These effects may be even harder to anticipate, as in the case of Eskom’s power plants operating under limited reserve margins. We can expect that many electricity utilities worldwide have difficulties in adapting their capabilities to a more competitive market environment or to technological changes related to decentralized renewable power generation, which are mostly stimulated and accompanied by new regulations. Direct and indirect effects on organizational capabilities may lead to severe and persistent capability gaps. These impacts of sector reforms and regulatory changes on organizational capabilities are certainly an underestimated issue among policy makers, utility managers and researchers. Traditional explanations of utility sector reforms have neglected the role of capabilities and have primarily focused on incentive structures and contractual concerns instead. Our findings suggest that, in addition to establishing adequate incentives and contracts, the impact of policy reforms on the capability structure of utility firms decisively determines the success of the reforms themselves. In other words, even if the incentive structures are sufficiently established, the loss of substantial capabilities may undermine the intended outcome of such programs. Thus, reform programs need to involve adequate measures

to consider their influence on organizational capabilities. For example, if a government’s energy policy aims at promoting new electricity generation technologies, there is a need to take into consideration the emerging trade-off between the achievement of an intended policy goal and the unintended effect on the capability structure, and to actively manage this trade-off. In this sense, the broad transformational and regulatory changes in the case of South Africa reflect and illustrate policy-induced situations in which utilities might be confronted with the loss of qualified personnel and struggle to attract employees with the required experience.

Summing up, we have laid out the basic argument that reform processes tend to have a fundamental impact on the capability structure of utility firms and therefore influence the performance of public service delivery. With these results, we add the capability dimension to the discussion of regulation, sector reforms and the governance of utilities. As a consequence, capability-related processes and their impacts need to be taken much more seriously in order to achieve successful utility sector reforms.

Acknowledgements

The authors gratefully acknowledge financial support from the Swiss South African Joint Research Programme (SSAJRP), Project No. 7.


Dossier

How can space-based infrastructure and assets contribute to the monitoring of global threats?

Pierre-Alain Schieb*, Claire Jolly**, and Barrie Stevens***

Abstract
Space-based infrastructure and assets are in a unique position to help global society monitor and provide early warnings of a variety of risks. This paper describes the capabilities of space-based infrastructure in global risk management, proposes strategies for exploiting these capabilities, and discusses requirements that need to be met for successful exploitation.

Recent global events, such as the so-called “subprime crisis” of 2007, have had both short- and long-term consequences, although the first impact was quite fast to unfold. Other events, such as a major solar storm, could have global consequences within as little as four hours. Pandemics, by definition, are also of a global nature. Some less catastrophic events are nevertheless quite significant, and although regional at first, they can create second-order consequences of very large magnitude: the 2010 eruption of an Icelandic volcano and its ash cloud was a good example of a regional event that, had it been more widespread because of the wind regime and duration, would have had global consequences. Local events in a globalised economy can also be the precursors of a major global crisis if they happen in a major financial centre such as London, New York or Tokyo. All in all, whatever the root cause, some events have the capability to disrupt economic, social and environmental ecosystems, since the increasing concentration of assets and population in selected spots creates a much more significant exposure to threats than in past centuries. Moreover, a networked global economy favours the mobility of factors (e.g., people, goods, microbes, viruses, and bits of information), so that many vectors are in a position to “export” to the rest of the world either the root cause itself (viruses) or the consequences (contagion in financial markets).

To cope with this new context, the OECD report “Future Global Shocks” insists on the need to build the databases, models and surveillance mechanisms that could provide a monitoring capability at such a global scale. It was therefore tempting to ask whether, and to what extent, space-based infrastructure and technologies could be part of the answer. As a result an OECD project

was launched in 2012, and the results were discussed at an OECD High Level Conference of the Space Forum in November 2012.

The new context of global risk management

Recent years have witnessed a plethora of major disasters: earthquakes, tsunamis, floods, pandemics, food shortages, collapsing fish stocks, to name but a few. These disasters have all left their marks. The economic cost of natural catastrophes and man-made disasters worldwide amounted to some USD 370 billion in 2011. Continuing population growth, climate change, the rapid expansion of cities, the concentration of economic assets, the pace of globalisation and increasing interdependence are all likely to ensure that the 21st century will witness more, and increasingly costly, shocks, some familiar, others new. Many of these disasters, though large in scale, will only have a national or regional impact. Others may be bigger, spilling across national borders and disrupting essential global value chains. Yet others could be truly global in nature, severely affecting several continents at once and calling for special approaches and measures.

Climate change
Climate change is emerging as one of the greatest long-term challenges society faces today. Global warming leads to modifications in rainfall patterns and fresh-water availability, rises in sea level, increases in extreme weather events, and varying effects on plants, wildlife and human activities. Since a degree of uncertainty is still attached to the various predictions and the science underlying them – as demonstrated by the long-standing worldwide scientific and political debate on these matters – better data, analysis and science are needed to further our knowledge both of climate change and of its effects on the natural environment and human activity.

* Pierre-Alain Schieb, Professor and Chairholder, NEOMA Business School, Former Head of OECD Futures Projects and Space Economy Forum; [email protected]
** Claire Jolly, Policy Analyst, OECD Space Economy Forum; [email protected]
*** Barrie Stevens, Consultant, Former Head of International Futures Programme; [email protected]; with support from Anita Gibson, [email protected]
This text is adapted from the article “Monitoring global threats: the contribution of satellite technologies”, OECD, November 2012.


Population growth and concentration of economic assets
The world’s population has already reached 7 billion people, and current projections see it rising to over 9 billion by the middle of this century. Rising human consumption will continue to place severe pressure on the earth’s ecosystems, through the over-harvesting of animals and plants, and the extraction of natural resources from land and sea. In parallel, increasing urbanisation has resulted in a rising number of megacities around the world with high concentrations both of people and assets in relatively small, compact areas. With such dense convergence of populations and collective wealth around geographic centres, the risk of a catastrophic event producing severe damage and loss has risen significantly.

Some 32 global threats are described and mapped in the WEF report. After an extensive review with experts from the space community and industry, it was concluded that about 10 (roughly one third) of the global threats would not primarily benefit from space-based technologies and infrastructure. Most of the threats that cannot directly benefit from space-based technologies relate to major challenges that have no physical footprint that can easily be monitored with space-based sensors: for example, retrenchment from globalisation, global imbalances and currency volatility, liquidity/credit crunch, economic disparity and fiscal crisis are not good candidates. It could be argued that second-order impacts of such potential root causes could be observed by satellites (the level of economic activity, for example), but basically other tools are better positioned to monitor such global threats.

Figure 1. The global footprint of a geomagnetic storm, April 2000 (OECD 2011)

Growing likelihood of cascading effects
In today’s tightly interconnected world, the effects of extreme weather events, environmental disasters, or critical infrastructure breakdown can cascade quickly across a country’s economy and society. And when mobility, economic interconnectedness and supply chains attain the global dimensions we see today, those same cascading effects can scale up to a level that affects many other countries and, indeed, other continents. The 2011 floods in Thailand, where one-third of the world’s hard disk drives are produced, had a domino effect on shipments of hard disks, affecting supply chains and prices across the international industry. Disruptions to air freight carriers’ hubs, usually due to extreme weather conditions (e.g., snowstorms, cyclone alerts), tend to result in bottlenecks and delayed deliveries to some key industries around the world. When several hubs are affected, the cascading effects can be even more pronounced, as in the case of the 2010 volcanic eruption in Iceland, which produced an ash cloud over much of Europe’s airspace. With numerous major air hubs paralysed, many companies were unable to deliver products or key components to markets and production systems throughout Europe.

Figure 2. Global Risk Landscape (WEF, 2011)

As a result, a selection of about 20 other major threats was included in the review by the OECD project team.

1) What are the major future global threats?

Most risk registers are maintained by national security authorities and are neither disclosed nor published. Therefore, in order to start the assessment of space-based capabilities against a universe of future global threats, the OECD study took the World Economic Forum (WEF) risk map of 2011 as its reference.


2) What are the capacities of space applications in helping monitor major risks and provide early warning?

The modern global economy is characterised by interconnectedness, greater complexity, heightened vulnerability, and faster propagation of the effects of disruption and disasters. In this new risk management context, space tools (earth observation, telecommunication, navigation, positioning, and timing) are found to be well suited to addressing many of the major threats reviewed in the study, both by allowing the acquisition of crucial data and the monitoring of propagation pathways, and by forming back-up hubs for telecommunications when needed. Their usefulness spans several risk management functions:

Initial assessment of risks: Space cannot provide all the information required for risk assessments. Risk mapping, however, does benefit considerably from satellite earth observation and positioning. The scientific accomplishments due to satellite use are already numerous, as satellite missions have brought about major scientific breakthroughs, particularly in climate observation and earth resource monitoring (e.g., satellite detection of long-term damage to the ozone layer leading to the passage of the Montreal Protocol in 1987, and detection and monitoring of the dramatic changes in the extent of Arctic sea-ice coverage). For geological hazards, remotely sensed topographic data, combined with precise positioning, provide unprecedented mapping of landscape and architectural characteristics, allowing the detection of surface fault-lines. Even for major risks less obviously suited to satellite observation, such as epidemics, satellite data are increasingly used: epidemiology combines medical parameters, weather conditions, entomology and general land use information to detect possible tipping points in disease occurrences (e.g., dengue fever, malaria).

Forecasting and monitoring risks: Building on their capacities for risk assessment, satellites play a crucial role in global routine surveillance. The data they constantly collect are in many cases used in scientific models, providing essential forecasting capabilities; and when an event occurs, they contribute to monitoring the propagation of threats (e.g., tsunami progression and impacts, or the development of severe droughts and their effects on global food supplies). As an illustration of space systems’ contribution to forecasting capabilities, significant improvements have been achieved in weather predictions over the past decade, due in large part to a larger international fleet of improved meteorological satellites, bringing about substantial gains in the accuracy of forecasts of large-scale weather patterns in both hemispheres. This has directly benefited early warnings of major hydrometeorological hazards (such as cyclones, thunderstorms, heavy snowfall, floods and heat waves, to name but a few). With respect to monitoring, the ubiquitous surveillance capability of satellites is applied to international borders and transportation hubs. These systems, based on imagery and real-time

tracking, combined with other surveillance mechanisms, contribute to detecting and tracking the cascading effects of illegal practices or accidents (e.g., tracking illegal fishing operations, the spread of piracy, and sea pollution and accidents impacting populated coastal areas and their fisheries, tourism and ecosystems).

Dissemination of warnings: In addition to warnings relayed by authorities to millions of people via commercial satellite television broadcasts, a number of operational early warning systems rely on satellite transmissions to dispatch real-time data alerts. Tsunami warning systems, for instance, are complex networks using data from seismic networks, buoys at sea and communications from ships, transmitting data via satellites. One conclusion to be drawn from recent tsunami events is that warning systems could be improved by further developing the density of the existing networks of stations and promoting the inclusion of other sensors, like continuous real-time global positioning observations.

Rapid response: Satellite links often represent the only option in places in the world where ground systems are not deployable. This is particularly true of telecommunications networks. Examples are the high seas, remote and sparsely populated regions, and land areas devastated by natural disasters. In the wake of the earthquake and tsunami in Japan in 2011, satellite links were the only viable route for telecommunications for almost two months.
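As an illustration of the kind of rule-based screening that satellite-collected AIS feeds make possible, the sketch below flags position reports inside a protected zone; the zone, radius and vessel records are synthetic values invented for the example:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in kilometres.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    ZONE = (-34.0, 18.0)      # assumed centre of a protected fishing zone
    RADIUS_KM = 50.0

    reports = [               # synthetic AIS-like reports: (vessel id, lat, lon)
        ("230123000", -33.8, 18.2),
        ("230456000", -30.0, 15.0),
    ]
    for vessel, lat, lon in reports:
        if haversine_km(lat, lon, *ZONE) < RADIUS_KM:
            print(f"vessel {vessel} inside protected zone: flag for follow-up")

Operational systems fuse such geometric checks with imagery, vessel registries and track histories, but the screening principle is the same.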

Figure 3. A glimpse at sea traffic via satellite monitoring of ships’ automatic identification system (AIS) (Source: Norwegian Space Centre)

3) How well positioned are space infrastructures and services to deal with major risks in future?

Four challenges should be tackled if space technologies’ contribution to monitoring major threats is to be significantly improved.

Challenge 1. Meeting the daily needs of very diverse users – A more systematic approach to the use of space-based capabilities has emerged in recent years both nationally and internationally, but a key challenge that remains is meeting the daily needs of users: not only scientific organisations but also various governmental agencies, international organisations, local planners and private users (e.g., fishermen, farmers). The close relationship between data users and data producers that exists in the case of weather applications is the exception rather than the rule. In other areas, such as risk management applications, the customer base is large and diverse, with very different levels of expertise. The need to demonstrate added value and cost-efficiencies should become the norm for new technological solutions integrated in early warning systems.



Challenge 2. Remedying the gaps in the coverage of space systems - Several of the risks identified in the report depend on an observing system involving a crucial satellite component. However, potential users are often little inclined to learn how the information they need is actually produced. They are more concerned about the timeliness, accuracy and pertinence of the information and services. For satellite data to contribute fully and effectively to many of the early warning systems identified, the systems must be implemented and operated in such a manner as to ensure that gaps are addressed (i.e. satellites’ revisit time, adequate resolution, real-time reactivity when necessary, sustained data products and archiving). This is a major technical and resource challenge.

Figure 4. Vegetation map based on data from sensors carried on different satellites (Source: NASA)

Challenge 3. Exploiting the benefits of technology convergence - The third challenge involves capitalising on the potential complementarities that exist among emerging strands of technological innovation, and exploiting the benefits of technology convergence both within and outside the space sector. The benefits of technology convergence will increasingly be used in major applications, from aeronautics (with intelligent aircraft that will become more and more automated, and satellite navigation guidance to cope with increasing air traffic) to new information technology applications (such as crowd-sourcing for real-time information in times of disaster) and biomarkers in managing major health risks. Although the opportunities for open innovation are growing, one key challenge is the need to operationally integrate very different systems so they can work together.

Challenge 4. Processing large amounts of data and integration - The fourth challenge concerns data needs – how to generate and access more relevant data, how to improve the analysis and evaluation of those data, and how to facilitate data sharing among sectors, institutions and countries. Key data challenges exist concerning the actual development of the required systems and the sustainability of the respective earth observation, communications, and navigation infrastructures. Space applications already have to manage an extraordinary array of diverse data – from geospatial information and raw satellite imagery, for example, to real-time sensory data feeds. Climate parameters, for example, are currently measured by several organisations for a variety of purposes. However, a variety of different measurement protocols are used, which results in a lack of homogeneity in the data (in space and time). This heterogeneity limits the use of the data for many applications and constrains the capacity to monitor and assess weather, natural resources and climate evolutions. Help will no doubt be forthcoming from major advances in data processing and storage (e.g. improvements in supercomputing, use of clouds, etc.), but such progress may also need to be accompanied by greater sharing of data and calibration of information across national, international and institutional borders.
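To make the homogeneity problem tangible, the following minimal sketch (in Python, using the pandas library) aligns two hypothetical records of the same climate parameter – sampled at different intervals and reported in different units – onto a common monthly time base. The instruments, values and dates are illustrative assumptions; real harmonisation would also involve inter-calibration of the sensors themselves.

    # Minimal sketch: aligning two hypothetical satellite records of the
    # same climate parameter that differ in sampling interval and unit.
    import pandas as pd

    # Instrument A reports daily values in kelvin (illustrative numbers).
    a = pd.Series([271.3, 271.9, 272.4, 271.6],
                  index=pd.date_range("2013-01-01", periods=4, freq="D"))

    # Instrument B reports weekly values in degrees Celsius.
    b = pd.Series([-1.5, -0.8, -1.1],
                  index=pd.date_range("2013-01-06", periods=3, freq="W"))

    a_celsius = a - 273.15                 # convert to a common unit

    # Resample both records to monthly means on a shared time base.
    monthly = pd.DataFrame({
        "instrument_a": a_celsius.resample("MS").mean(),
        "instrument_b": b.resample("MS").mean(),
    })
    print(monthly)                         # one comparable row per month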

Conclusion

Space-based technologies, taken as an infrastructure for monitoring global threats, do not represent a panacea for monitoring all major global threats. However, as part of an integrated communication system, they are in a unique position to provide important services to society. For example, space-based infrastructure can complement or even take over when ground systems have broken down: emergency communication in case of major disasters can rely on space-based linkages, as can the electronic transfer of financial data. Another example is the inclusion of space-based capacities in the forthcoming air traffic management systems in Europe (SESAR) and the USA (NextGen) that will deliver automated traffic monitoring and aircraft pilot systems: such a system could not only improve the capabilities, safety, and fuel efficiency of airlines, but would itself provide a unique network of climatic and weather sensors, as each aircraft could carry data sensors complementing the existing data collection networks (satellites, ground stations, oceanic buoys).

Of course, to be efficient and reliable, space-based technologies have to be deployed as an infrastructure. They share the same requirements as ground-based infrastructure in terms of coverage, continuity of service, safety, cost efficiency, and sustainability. To conclude, a number of space-based systems have moved from being pioneering scientific missions or R&D demonstrators to full-scale infrastructure, which needs to be managed as such.

References

OECD (2011), Future Global Shocks: Improving Risk Governance, OECD Publishing.
World Economic Forum (2011), Global Risks 2011, Sixth Edition. An initiative of the Risk Response Network.




The Postal and Courier Services as Critical Infrastructure

Toni Männistö

ABSTRACT

This essay clarifies why postal and express services are critical for society, what events could potentially disrupt postal and courier networks, and how we could mitigate the risk of such events.

Why are postal and courier services critical for society?

Postal and express courier services are integral elements of a country's overall logistics and communication infrastructure. Posts and courier companies complement the electronic means of communication and diversify the variety of logistics services by carrying messages and relatively small goods. National legislation specifies the scope and quality criteria for the basic postal services that the national postal operators (the Posts) must fulfill. In Switzerland, for example, the national operator Swiss Post is legally obliged to deliver letters, parcels, newspapers and periodicals up to 30 kg to all permanent settlements in Switzerland at least five days a week, to charge a uniform and affordable price for all domestic mailings of the same type and class, and to provide "reasonably convenient" access to the basic postal services to all people living in Switzerland. Posts typically operate dense and functional nationwide delivery networks, and their mail carriers visit virtually every household in a country on a daily basis. Speed, time-definite delivery, extensive tracking and integrated door-to-door service are the hallmarks of the express courier service, which is often described as "the business class for cargo" because of its higher price and superior service in comparison to postal and freight services (EEA 2013). The global express courier market is dominated by four industry giants: UPS, FedEx, DHL, and TNT.

Many countries consider the postal and courier services national critical infrastructure. Criticality means that a failure or disruption in the postal and courier services can result in substantial economic and social losses, even human casualties. The following sections exemplify the critical roles of the postal and express courier services.

1) Enabler of competitive business models

The postal and courier delivery services enable business models that involve shipping small consignments of physical products directly to consumers' homes (e.g., mail order retailers, online auction sites, and press houses).

Posts and couriers also handle the reverse flow of returned, damaged, and outdated products from customers back to manufacturers and retailers. Besides, the postal and express courier services provide fast and simple access to foreign markets, especially for small and medium-size enterprises. Fast express courier service enables manufacturers to rearrange their production strategies for higher efficiency and better customer service. For example, manufacturers can use the time saved in express transit to move from a make-to-stock to an assemble-to-order fulfillment strategy. Pushing the order penetration point towards the upstream end of the value chain decreases finished product inventory, increases responsiveness to uncertain demand, and enables the manufacturer to tailor products to customers' specifications. The time-definite express service also enables companies to adopt synchronized just-in-time logistics, where materials arrive precisely when needed in production, for higher cost-efficiency.

2) Reliable messenger

The role of Posts and express couriers as messengers has been rapidly diminishing in the information era. However, postmen and couriers still facilitate communication between citizens, authorities, and businesses by carrying contracts, love letters, patent applications, diplomatic correspondence, exams, votes, visas, passports, and many other documents. In fact, Swiss Post still delivers on average 18.8 million letter-size items a day in a country of around 8 million inhabitants. Moreover, the physical "snail mail" plays a particularly important role as a back-up communication channel if electronic means of communication fail.

3) Courier of time-critical supplies

Only express couriers offer fast enough delivery services for time-critical goods and documents. For instance, the health care sector, where timely deliveries are a matter of life and death, employs express couriers to transport medical supplies, including medicines, laboratory samples, and blood. Critical spare parts are another example of time-critical products.

Toni Männistö, EPFL Chair MIR




Organizations across industries – in particular airlines, power plants, automobile manufacturers, and armed forces – rely on fast spare part deliveries to minimize costly downtime of operations. Short transport times are also crucial for perishable consumer goods such as fruit, fish, and flowers.

4) Aide in emergency logistics

Authorities could exploit the nationwide postal and courier networks to organize emergency logistics. The postal systems, for example, could distribute antidotes to the entire population within 24 hours to contain the spread of a contagious disease in the event of a bio-terror attack or a pandemic outbreak. Postmen and couriers could also contribute to relief logistics missions by delivering food, shelter and medicines into disaster zones as soon as conditions allow safe operation.

5) Expeditor of military mobilization

The defense forces could use the postal and courier networks to call citizens to arms, especially when a surprise attack has disabled electronic means of communication. The postal and courier networks could also supply a countrywide militia with weapons and ammunition. The supply of ammunition by mail could be a strategy in Switzerland, where many male citizens keep their military weapons, but no ammunition, at home.

What events could disrupt the postal and express services?

A disruption refers to a period when Posts or express couriers are incapable of offering their standard services. Frequent late deliveries and pick-ups indicate a partial disruption, and complete unavailability of the service marks a complete disruption. The postal and express courier services are exposed to a range of hazards that can trigger disruptive events. Technically speaking, a disruption occurs if a disruptive event decreases the capacity of one or more key resources of the postal and courier services – for example, when most postmen cannot work. The following subsections link the key resources to disruptive events.

1) People

Letters and parcels do not get picked up, sorted, or delivered without the joint effort of mail carriers, sorting center staff, truckers, engine drivers, customs officers, pilots, flight controllers, and many other professionals. Thus, strikes in key personnel groups can result in extensive and prolonged disruptions in the postal and courier services. The same applies to terrorist attacks or propaganda that can make people too terrified to work. A pandemic can also take its toll on the workforce, and floods, earthquakes and other natural disasters may affect employees' ability to work and commute.

2) Facilities

Sorting centers and airports are the chokepoints of the postal and courier networks. In Switzerland, for example, each postal shipment travels through at least one of the four sorting centers (two for mail, two for parcels). Beyond the sorting centers, airmail shipments are often routed through multiple airports. Accidents, natural disasters, terrorism, power outages and sabotage could result in the closure or congestion of one of these logistics chokepoints and delay the services. For example, the discovery of suspicious white powder caused a mass evacuation and a day-long shutdown of one of Swiss Post's four sorting centers in September 2012 (20min 2013). Laboratory tests later revealed that the suspicious powder was potato flour.

3) Transport routes

The routing of postal and express shipments depends on the availability of roads, rails, airspace, bridges, tunnels, airports and other transport infrastructure. Therefore, disasters and accidents that block transport routes often also delay the postal and express courier services. In April 2010, a volcanic ash cloud suspended air cargo and mail traffic in the skies of Northern and Western Europe for four days. At its worst, the ash cloud delayed or cancelled 29% of global scheduled flights (IATA 2010). Later, in October 2010, authorities discovered two air cargo bombs en route from Yemen to the US. Al-Qaeda claimed responsibility for the attempted bombings. The event triggered an immediate regulatory response from the US authorities: unprecedentedly stringent air cargo and mail security regulations entered into force overnight and substantially delayed US-bound postal and express traffic.

4) Vehicles

Effective postal and express services rest on an operational fleet of vehicles: vans, mopeds, trucks, bicycles, trains, and airplanes.



Some operators might be tempted to operate a uniform transport fleet to reduce the costs of maintenance and training. However, reliance on a single car or aircraft model exposes the operators to the risk of model-specific defects that could immobilize the entire fleet. For instance, two Japanese airlines grounded all their Boeing 787 planes in response to a perilous emergency landing of one of their 787s (The Guardian 2013). Besides, sabotage and interruptions in fuel supply could make planes, cars and other vehicles inoperative.

5) Information

Most Posts and express couriers exchange electronic messages internally and with their business partners and authorities to coordinate logistics, settle payments and fulfill legal requirements. The dependency of modern postal and courier services on ICT infrastructure makes the Posts and express couriers vulnerable to power outages, cyber attacks, solar storms and other events that might incapacitate information and communication infrastructures. The Internet, telephone networks, and satellite navigation systems are vital for efficient logistics management.

6) Service providers

External service providers are increasingly important facilitators and enablers of postal and express courier services. The Posts often buy services from private road and air carriers, as well as partnering with local shopkeepers to expand their post office networks cost-efficiently. In the airmail domain, specialized companies commonly take care of the security screening and handling of airmail shipments. If a key supplier faces a sudden bankruptcy or loses its license to operate (e.g., an air cargo screening company), the postal and courier services could be delayed.

How can we mitigate the risk of disruptions in postal and courier services?

This section gives a brief outlook on possible strategies that postal and courier companies could implement to reduce the likelihood of disruptions and dampen their impact. The three strategies largely follow the contours of the analysis of Stecke and Kumar (2009).

1) Robustness – "Design and build strong"

Proactive strategies aim to reduce the likelihood and effects of disruptive events. Effective proactive strategies eliminate or reduce sources of hazards (risk avoidance) and increase a system's ability to absorb the shocks of disruptive events while continuing normal operations (robustness). The Posts and express couriers could establish their logistics hubs (especially sorting centers) in "safe" locations that are not likely to be affected by natural disasters, conflicts or failures in power supply or other critical infrastructures. The operators should also prefer established and tested IT systems to new, potentially vulnerable ones. Selection of financially stable service providers reduces the risk of bankruptcy of logistics partners.


Fostering close relationships with employees and labor unions may reduce the possibility of strikes. Basic and advanced security solutions, such as access control, background checks of job candidates, camera surveillance and burglar alarms, reduce the risk of crime and terrorism.

2) Early warning ability – "Monitor relentlessly"

No system is one hundred percent robust. A disruptive event may overwhelm all safeguards no matter how robust a system is. Because all hazards cannot be eliminated, organizations need to monitor their operational environment and build capabilities to cope with inescapable disruptions. Effective early warning strategies increase the time that organizations have to prepare for imminent disruptive events. Some hazards, like hurricanes, develop over time, giving vigilant organizations plenty of time to rearrange their operations. On the other hand, events like earthquakes and terrorist attacks are hard, if not impossible, to predict. To get early information about looming disruptive events, the Posts and courier companies can monitor labor union activity, weather forecasts and service providers' financial situation. Moreover, the postal and courier operators can leverage tracking and tracing information to get first-hand information on delays, lost mail items and other anomalies in the mail delivery pipeline.

3) Resiliency – "Cope with the aftermath"

Resilient systems are able to bounce back to their original state and resume operations after being disrupted. Redundancy, flexibility, and investigative capabilities increase a system's resiliency by reducing the length and limiting the extent of disruptions. Redundancy refers to a system's extra overall capacity that can be used to offset lost local capacity. Flexibility refers to the ability to redeploy existing resources to cover a loss of capacity in other locations. Investigative capabilities enable organizations to identify and eliminate the sources of disruptions fast. Posts and courier companies can increase redundancy by investing in spare sorting capacity, reserve vehicles, and back-up IT systems. Redundancy can also shield postal and courier services from failures in other key infrastructure sectors. For instance, own power generators and fuel reserves enable mechanized sorting and motorized collection and delivery in the event of a power outage or fuel shortage. Posts and express couriers can increase their flexibility by cross-training employees, increasing their mode-shifting capability and making plans for alternative routings. Tracking and tracing, in-house security investigators and close collaboration with law enforcement authorities enable Posts and express couriers to cope with disruptions arising from intentional criminal or terrorist activities. For example, fast investigation and resolution are paramount in mail bomb and "white powder" attacks, as only putting offenders behind bars can restore public confidence in the safety of the postal and courier services. In particular, participation in government-driven voluntary supply chain security programs like C-TPAT (Customs-Trade Partnership Against Terrorism) may allow the Posts and express couriers to resume their operations faster in the aftermath of a terrorist attack.
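As a rough illustration of the redundancy arithmetic discussed above, the following minimal sketch (in Python) estimates how a modest spare-capacity margin limits the backlog, and speeds its clearance, when one of four sorting centers is lost for two days. All figures are illustrative assumptions, not data from any actual operator.

    # Minimal sketch: backlog and recovery when one of four sorting
    # centres is lost. All figures are illustrative assumptions.
    DAILY_VOLUME = 18_000_000     # items entering the network per day
    CENTRES = 4
    SPARE = 0.15                  # assumed spare capacity per centre
    OUTAGE_DAYS = 2               # assumed duration of the outage

    per_centre = DAILY_VOLUME / CENTRES
    reduced_capacity = (CENTRES - 1) * per_centre * (1 + SPARE)

    # Backlog accumulated while the centre is down:
    backlog = max(0.0, DAILY_VOLUME - reduced_capacity) * OUTAGE_DAYS

    # Days needed to clear the backlog once all centres are back,
    # using the spare margin as surplus throughput:
    surplus_per_day = CENTRES * per_centre * (1 + SPARE) - DAILY_VOLUME
    recovery_days = backlog / surplus_per_day

    print(f"backlog after the outage: {backlog:,.0f} items")
    print(f"recovery time at full capacity: {recovery_days:.1f} days")

Under these assumptions, the two-day outage leaves a backlog of roughly five million items, which the 15% margin clears in under two days once the centre is restored; with no spare capacity at all, the backlog could never be cleared without extraordinary measures.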



Conclusion

Reliable postal and courier services are a matter of necessity rather than convenience. Disruptions in the postal and courier services damage commerce, undermine public administration, jeopardize patient safety and weaken governments' capability to respond to emergencies. Against this backdrop, the designation of postal and courier services as part of the national critical infrastructure seems justified. This essay identified six key resources that underpin postal and courier services: people, facilities, transport routes, vehicles, information and service providers. Each resource is vulnerable to a range of disruptive events that may disrupt the postal and express courier services.

The Posts and express courier companies can mitigate the risk of disruptions. They can design and build stronger business processes that are able to absorb the shocks of disruptive events or avoid them altogether.

The Posts and express couriers also benefit from constant monitoring of their operational environment. Early warning of imminent disruptive events gives the Posts and couriers time to prepare. Finally, the Posts and courier companies should invest in aftermath capabilities to decrease the extent and length of unavoidable disruptions. Highly resilient postal and courier services, which recover rapidly from disruptions, bring tremendous benefits to society.

References

20min, 2013. 220 Menschen aus Briefzentrum evakuiert. Accessed 9 August 2013 at http://www.20min.ch/schweiz/zuerich/story/23214565
European Commission, 2005. Green Paper on a European Programme for Critical Infrastructure Protection. Official Journal of the European Union.
IATA, 2010. Press Conference Berlin. Accessed 9 August 2013 at http://www.iata.org/pressroom/speeches/Pages/2010-04-21.aspx
The Guardian, 2013. Accessed 9 August 2013 at http://www.guardian.co.uk/business/2013/jan/16/787-emergency-landing-grounds-787
Stecke, K.E. & Kumar, S., 2009. Sources of Supply Chain Disruptions, Factors That Breed Vulnerability, and Mitigating Strategies. Journal of Marketing Channels, 16(3), pp. 193–226.




What are the pre-requisites for managing a critical infrastructure such as Air Traffic Management?

Marc Baumgartner*, Valerie November**, and Anthony Smoker***

ABSTRACT

Failures of air traffic control systems can result in disastrous consequences, which is why many countries have designated air traffic management as critical infrastructure. But is the tried and tested philosophy of critical infrastructure protection applicable to air traffic management?

1. Background

The regulatory arrangements and requirements for Air Navigation Service Providers aim to achieve the provision of a highly reliable and continuously available service to a range of aircraft operations in the air. The system that achieves this comprises three elements: people, procedures, and the technical or engineered system. All of these elements are constantly backed up by procedures, technical systems, and contingency and/or recurrent training for personnel, which makes it possible to work in a degraded mode. This is mainly achieved by critical technology being designed to meet specific reliability criteria and being backed up by technology that provides redundancy to the system – one or two back-up systems and independent emergency technology. For example, a VHF radio system, which serves to exchange communications between ground and airborne actors, will typically be backed up by an identical slave system and, additionally, by a completely independent emergency radio. Initial and refresher training makes sure that air traffic controllers are competent and capable of providing an equivalent or a degraded service to the aircraft under their control. In the remote and spectacular event that an unforeseen situation arises that does not fall under the Single European Sky's hazard and/or risk assessment methodology1, ad-hoc solutions have been found.

Is the tried and tested philosophy of the past and present sufficient to cope with some of the new critical infrastructure challenges that will confront Air Traffic Management (ATM)2?

1 EC COM 670/2011
2 Air Traffic Management (ATM) includes Airspace Management (ASM), Air Traffic Flow Management (ATFM) and Air Traffic Services (ATS). Together with alerting services and Flight Information Services, Air Traffic Control is part of the latter (EASA NPA 2013-08).

2. The context

According to 2013 figures from the International Air Transport Association (IATA), close to 3 billion people flew safely on 37.5 million flights in 2012. This means that each day approximately 100,000 flights departed from approximately 9,000 airports around the globe and arrived safely at their destination. Between 8,000 and 13,000 airplanes are in the air at any given time3. The average growth in passenger figures is around 5% per year (5.5% in 2012), and the International Civil Aviation Organization (ICAO) estimates that by 2030 about 6 billion passengers will be carried every year.

Any commercial flight taking off from an airport has to follow a given set of routes and to indicate the intended route through the route network, in four dimensions, via a flight plan. These routes form a dense network of "aerial highways" in the sky and span the globe (Figure 1). Air traffic control is responsible for controlling all of the flights in the air at any given moment and any given location around the globe. From the closing to the opening of the aircraft doors, a commercial flight will follow air traffic control instructions and be under the surveillance mechanisms available to ATM.

Air traffic control (ATC) is the service provided by ground-based infrastructure, including air traffic controllers who direct aircraft on the ground and through controlled airspace, and who can provide advisory services to aircraft in non-controlled airspace. It forms one part of the ATM triad. The primary purpose of ATC worldwide4 is to prevent collisions between aircraft, organize and expedite the flow of traffic, and provide information and other support for pilots. In some countries, ATC plays a security or defense role, or is operated by the military. According to the International Federation of Air Traffic Controllers' Associations (IFATCA), there are roughly 70,000 air traffic controllers controlling air traffic globally, around the clock (24/7).

3 ZHAW – AirTraffic LIVE
4 ICAO, Annex 11, 13th edition (valid from 14.11.2013)

* Marc Baumgartner is an active air traffic controller in Geneva, Switzerland, and former President of IFATCA
** Valerie November, Research Professor (Directrice de recherche), CNRS, LATTS-ENPC, ParisTech
*** Dr. Anthony Smoker is Manager Operational Safety Strategy at NATS, UK, and works for IFATCA


Operational and technical regulation at the global level is organized by ICAO, and the contracting states of ICAO (numbering 191) guarantee the translation of the Standards and Recommended Practices at the national level. Under Article 28 of the Chicago Convention, the individual states are responsible for air traffic control both over their territory and in the airspace allocated to them over the High Seas by ICAO (Baumgartner & Finger 2013).

3. Organisation of Air traffic control

From an organizational point of view, global air traffic control can be broken down into four main blocks, volumes or layers. Air traffic control towers (the most visible part of ATC) are normally located at the airport. A control tower unit is responsible for the control of all movements on and in the close vicinity (up to 50 km) of the runway. This includes aircraft and some of the vehicles on the tarmac. The approach units control the traffic which departs from or arrives at the airport (up to 180 km) and can be located anywhere. The en-route or Area Control Centre5 controls the rest of the airspace. In Europe, there are 67 Area Control Centres, which are divided into at most 1,380 sectors (volumes of airspace). For areas where there is no radar-based surveillance, in particular the large portion of the oceans, control is exercised from oceanic control centers. They control without radar surveillance, and this control is referred to as Non-Radar control (e.g. the North Atlantic is controlled by Reykjavik, Prestwick and Santa Maria on the European side and New York and Gander on the North American side). The traffic routes across the North Atlantic vary according to the optimal minimum time track that provides benefits to aircraft operators. Approximately 2,000 movements cross the Atlantic on a daily basis.

Figure 1. ICAO global air traffic according to density

5 In Switzerland, two Area Control Centres exist – one located in Dübendorf, the other in Geneva. About 3,200 movements are controlled from these centres on a daily basis.

4. A Structured Approach to ATM Responses in case of Degraded Operations

4.1. Capacity

A structured approach to the better identification of the current and future challenges for air traffic control is proposed here, with regard to the criticality of the ATM infrastructure and service provision. All air traffic control units around the world, be it an air traffic control tower controlling traffic at an airport or a large oceanic air traffic control centre (e.g. Oakland, US, which controls 10% of the earth's surface over the Pacific Ocean; Hemdal 2014), use a system of declared capacities. Capacity in air traffic control is defined by IFATCA as "the maximum number of flights that may enter a sector per hour, averaged over a sustained period of time, to ensure a safe, orderly and efficient traffic flow". Factors such as traffic flow direction, coordination procedures, in-sector flight times, the technology available and so forth are the criteria which lead to the calculation of the declared capacity. At airports, elements such as runway occupancy time, available parking positions, and geographical and airport design elements also play a role when setting the capacity of an airport. Normally, sector or runway capacity is expressed in hourly rates or values. In some cases the declared capacity of an ATC functional unit may build in a margin for contingency, e.g. an aircraft emergency.

4.2. Reduced Capacity

In Air Traffic Management, when a factor (ATS, ATFM or ASM) influences or contributes to non-normal operations, capacity is frequently adjusted or reduced according to pre-established scenarios. These scenarios are identified through the functional hazard and risk analysis carried out prior to declaring the capacity value. Weather phenomena (Performance Review Body 2013) such as cumulonimbus, storms and related turbulence can lead to a reduction of capacity. Technical equipment degradation, such as the loss of radar display functions, radio two-way communications and/or any other technical equipment, has specific reduced capacity values. Reduced capacity values can go hand in hand with emergency procedures, but can also be applied in normal operations where procedures are conditional on the availability of a facet of the technical system, or where features of the operational environment dictate a change (e.g. Low Visibility Procedures at an airport, where the separation between landing intervals is increased to provide enough spacing between aircraft performing automatic landings).

Some of these capacity reductions are used and published ahead of events, i.e. strategically (e.g. storm warnings6 or preventive technical interventions), and the airspace users can plan accordingly. In the case of special events (e.g. the London Olympics), a form of regulation called PPR (Prior Permission Required) is used to increase predictability when the overall system is close to 100%.

6 http://www.businessweek.com/news/2013-10-27/u-dot-k-dot-transportoperators-prepare-for-worst-storm-in-five-years
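To illustrate how a declared capacity translates into ground holding once a regulation is applied, the following minimal sketch (in Python) assigns sector-entry slots on a first-come, first-served basis – in the spirit of, though far simpler than, the operational ATFM slot allocation. The regulated rate and the requested entry times are illustrative assumptions; the €81-per-minute figure is the network delay cost cited later in this article (PRR 2012, Eurocontrol).

    # Minimal sketch: ground delay when a sector regulated at 24
    # entries/hour receives more demand than it can accept. The demand
    # profile and the regulated rate are illustrative assumptions.
    REGULATED_RATE = 24                     # flights per hour
    SLOT_INTERVAL = 60.0 / REGULATED_RATE   # minutes between entry slots

    # Requested sector-entry times, in minutes after the regulation starts:
    requested = [0, 1, 2, 4, 5, 5, 6, 8, 9, 10, 12, 13]

    next_slot = 0.0
    total_delay = 0.0
    for t in sorted(requested):
        slot = max(t, next_slot)            # earliest free slot at or after t
        total_delay += slot - t             # delay absorbed on the ground
        next_slot = slot + SLOT_INTERVAL

    print(f"total ground delay: {total_delay:.0f} minutes")
    print(f"network cost at 81 EUR/minute: {total_delay * 81:,.0f} EUR")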



Capacity reductions of this nature result in fewer aircraft being accommodated at any one time, and therefore in certain regions of the world (e.g. Europe) aircraft experience ground holding delays. In other regions of the world, or when the event happens while the traffic is already airborne, aircraft in the air are instructed to hold en-route.

4.3. From an 'Empty the Airspace' Scenario to '0 Rate – Progressive but Rapid Process'

Some emergency scenarios foresee a reduction of the offered capacity to zero at a rapid pace. This is valid for en-route ATC as well as for airport air traffic control units. The real emergency for air traffic controllers arises when the passage from normal operation to degraded mode is rapid and the volume of traffic is close to or at the maximum of the declared capacity. These scenarios are trained for by control teams in the simulator and follow established scenarios, backed up by emergency manuals and checklists. If a failure is operationally significant and no solution can be found to "empty the sky", so-called "0-rate scenarios" are applied. This means that no more traffic is accepted in the given airspace (sector or airport). The aircraft still flying have to hold or detour in order not to enter the given sector, which is no longer functioning. Due to the network operation of air traffic control, these scenarios are absorbed by neighboring or adjacent air traffic control units or airports.

When the Area Control Centre in Zürich (located in Wangen) was evacuated due to a (faulty) fire alarm on 16.1.2013, the adjacent area control centres – Geneva, Munich, Vienna, Milano, Rome, Reims, Karlsruhe – and the airport of Zurich absorbed the en-route traffic bound for the Zurich airspace. Diversions to other destinations were coordinated, and some aircraft had to hold en-route. From a network resilience point of view, one could conclude that the loss of an ATC unit is absorbable, though with many additional effects on neighboring ATC units. Such operations are part of the normal performance variability that ATC and ATM accept as normal business: the unexpected event needs to be catered for and managed (including emergency-type responses) by the neighbors. Permanent loss of ATC facilities and/or closure of airspace will create additional workload for the adjacent facilities for some 2–4 hours, until alternative scenarios and adequate capacity are put in place. Since 2012, the Network Manager7 has published reports which reflect the impact of such losses of ATC units on the network, in terms of minutes of delay8.

4.4. Catastrophic or Crisis Scenarios and ATC

Eurocontrol's9 European Safety Plan defines the following types of operations:

7 Eurocontrol, Network Manager Annual Report, June 2013
8 Currently, a minute of delay in Europe is worth €81 in the calculation of costs (PRR 2012, Eurocontrol)
9 http://www.eurocontrol.int/services/european-safety-programme


• Normal operations are situations in which all elements of the system (including staff) are functioning as intended. Minor faults may need to be resolved, but they do not place restrictions on the systems and staff, and all routine tasks are achievable.
• Degraded modes of operation arise when problems in the underlying system occur. These are expected but are not considered normal. Staff have procedures for dealing with these situations, and the risks associated with any failure are not considered significant. Reduced staffing levels are an example of a degraded mode.
• Crises are adverse events that need not force a move from the operations room. More serious than degraded modes, they last for a shorter time than contingency, but may be severe in nature. Examples might include strikes, floods and fires, security incidents, and bomb warnings.
• Contingency represents a situation in which it is necessary to move from the standard operations room. These situations may be more long-term than crises, and result in an interruption to the ATM service.

Experience with catastrophic scenarios in ATC shows that they have diverse impacts on the resilience of an air traffic control system. Solar storms10, which distort the GPS signal or lead to a loss of satellites used by aviation to communicate and navigate, are managed; weather-related catastrophic scenarios (e.g. Hurricanes Katrina and Rita11 in 2005) lead, in the strategic phase, to a reduction of capacity and/or the closure of portions of the airspace. Earthquakes and tsunamis from the recent past (e.g. Fukushima 2011, Christchurch 2011) have been managed on a completely ad-hoc, case-by-case basis12. Where the ground infrastructure (airport and communication, navigation and surveillance equipment) is not affected, air traffic control becomes part of the relief operations (e.g. Thailand and Bali 2004). Where the ground infrastructure is destroyed, ATC ceases to function (e.g. the airport of Sendai, 2011) if no emergency infrastructure is available (NATS UK has recently built a remote facility to provide ATC in reduced mode at Heathrow, in the form of an emergency tower located in a safe place)13. In war zones, various concepts of mobile or advanced combat ATC units14 are used to replace destroyed or unavailable ATC infrastructure. Other than in war zones, direct terrorist attacks on ATC facilities are not known, although bomb threats have been experienced. Together with the FAA and the EU, ICAO has started to work on the cyber vulnerability of ATC infrastructure.

Due to the monopolistic nature of Air Traffic Management (ATM), there has been little competition and innovation in the respective national markets.

10 http://easa.europa.eu/events/docs/2013/03/20/EASA_SIB_2012-09_.pdf
11 http://www.natca.org/press_releases.aspx?aID=1716#1716
12 Procedures do not exist for all situations; the organization adapts, and this provides resilience and the critical system quality needed to sustain operations and limit the effect on aircraft operations.
13 http://www.nats.aero/news/worlds-first-approved-remote-atc-contingency-facility-unveiled/
14 GATCO, Transmit, Winter 2010-11, Controlling Afghan Airspace



As a result, ATM uses relatively antiquated technology (on the ground) compared to the more innovative technological developments on the airborne side (Baumgartner & Finger 2013). Lately, it has been identified that the arrival and use of new technology in ATM increase vulnerability and open new trajectories for perturbations that can be controlled only in very limited ways. The EC15 has put in place a strategy on cyber vulnerability which also includes air traffic control. It is foreseen that air traffic control will become more vulnerable to large-scale cyber-attacks, as future systems will be based on large-scale satellite-based (e.g. EGNOS) surveillance, communication and navigation systems (Johnson 2013).

5. Is Air Traffic Control part of the Critical infrastructure?

The notion of critical infrastructure is recent and vague (November 2011). Generally, it can be described as a term used by governments16 for assets that are essential for the functioning of a society and economy. Another definition (Swiss Re 2010) describes it as a physical or intangible asset whose damage or disruption would seriously undermine public safety, social order and the fulfilment of key government responsibilities. One could argue that this has already been translated by ICAO, though ICAO speaks more of the establishment of effective safety (Universal Safety Oversight Audit Program, USOAP) and security (Universal Security Audit Program, USAP) oversight systems. Under the USOAP, states are expected to implement safety oversight of critical elements in a way that assumes the shared responsibility of the state and the aviation community. Critical elements of a safety oversight system encompass the whole spectrum of civil aviation activities, including areas such as aerodromes, air traffic control, communications, personnel licensing, flight operations, airworthiness of aircraft, accident/incident investigation and the transport of dangerous goods by air.

Several countries17 around the globe have identified air transport as part of the critical infrastructure (CI). Some (Eurocontrol 2012) have even explicitly mentioned air traffic control as part of the CI, based on the more network-centric future management of, for example, the European airspace. EC pilot studies18 and research (Johnson 2013) have identified the need to integrate into the European CI new concepts such as the European Network Manager19, as well as SESAR with its satellite-based elements, as possible candidates given the criticality of complex systems and their interdependencies.

For an actor such as an Air Navigation Service Provider (civil or military), oversight authority elements such as security, safety and sustainability are part of its daily activities.

15 Proposal for a Directive of the European Parliament and of the Council concerning measures to ensure a high common level of network and information security across the Union – COM (2013) 48 Final – 7/2/2013
16 http://www.dhs.gov/what-critical-infrastructure
17 E.g. Switzerland; see the presentation by Graf, M.-A., Programme de protection des infrastructures critiques, 2008
18 A pilot study under discussion, to be conducted jointly by Eurocontrol and the Joint Research Centre, to identify and designate assets as part of the review of Directive 2008/114/EC by DG HOME
19 EC 677/2011

Risk management has become an integral part of the Safety Management Systems that states and ANSPs have to put in place, although no real "branding" has occurred in the air traffic control infrastructure to date. Additionally, ANSPs can also apply standard business risk management processes. This raises the issue of when a safety issue is a business risk, and vice versa: the sustainment of service provision is fundamentally a business risk, but it can have safety implications, for example.

In most of the world's modern Air Navigation Service Providers, notions such as "redundancy", "continuity", "degraded mode" and "emergency situations" form an inherent part of the regulatory requirements, and significant resources have been put in place to cope with the continued, safe and sustainable provision of service. Typically, air traffic controllers who handle the busiest air traffic control sectors of Europe, with up to 30 aircraft at any given moment, work with systems which are up to three times redundant (Baumgartner 2007). Many flight operations also cater for the unavailability of airports (due to meteorological phenomena such as snow, flooding and/or storms) by carrying additional fuel for diversion purposes. Modern aircraft are equipped with vital systems to communicate, navigate and fly in a redundant manner. Recent accidents attributed to the air traffic control system show that there is a constant need to have air traffic control systems designed and operated with redundancy as an integral part of the system (Milano 2001; Ueberlingen 200220).

Following the publication of the European Union Regulation21 on critical infrastructure, European Member States and associated states have to put in place national critical infrastructure plans. Some EU member states have listed air traffic control as part of their critical infrastructure. Switzerland22 has put such a plan in place, and Skyguide (the national Air Navigation Service Provider for Switzerland) is part of it. Nearly all ATC units around the world have a form of back-up for a certain time period (be it in case of an energy supply failure or a breakdown in the communication infrastructure), though examples have shown that in certain cases (ATC Canarias23) the back-up may not last long – and other forms of redundancy will have to be found.

From past experience with continental disruption, ATC, though affected, has remained fully functional. When the US closed its airspace on the morning of 9/11, the ATC infrastructure was working and went through a crisis situation (having to manage the closure of the airspace). The Canadian neighbors decided to close their airspace (Operation Yellow Ribbon) after having provided ATC to all the traffic travelling to North America: the airspace was closed, while the ATC infrastructure remained functional.

20 Bezirksgericht Bülach, Urteil 21.8.2007
21 Council Directive 2008/114/EC of 8 December 2008 on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection (Text with EEA relevance)
22 Swiss Federal Council, National Strategy for Critical Infrastructure, 27.6.2012
23 ATC Magazine, Año XVI, No 64, primavera 2010



During the eruption of the Icelandic volcano Eyjafjallajökull in spring 2010, the ATC infrastructure was not affected. It could be inferred that all the possible disruptions described herein have a direct impact on the use of the infrastructure, but not necessarily on the ATC infrastructure itself. Following the eruption of Eyjafjallajökull, the European Commission, together with the Network Manager of Eurocontrol, established the European Aviation Crisis Coordination Cell (EACCC)24, which will be activated if a major crisis affecting air transport should happen again.

Despite all the above, and based on the currently available information, it is difficult to decide what measures are needed, over and above the ones described here, for air traffic control to prepare itself for a continental-wide crisis affecting the infrastructure at large. Figure 2 below tries to explain this schematically with seven events.

No more ATC services, but the infrastructure functions: The terrorist attacks of 9/11/2001 in the US, where, in the emergency responses SCATANA (US) and Operation Yellow Ribbon (Canada), 5,000 aircraft were landed within less than an hour in US airspace and over 600 flights were accommodated on the East coast of Canada; the system continued to work perfectly and even exceeded its designed performance. The volcano Eyjafjallajökull (14.4.2010), where part of the European airspace was closed but the ATC infrastructure remained fully functional.

24 EC 677/2011, Art. 18 and 19


The evacuation of the ATC unit in Zurich (16.1.2013), where an emergency scenario was applied for 20 minutes.

No more ATC services, followed by destruction of the infrastructure: The weather-related Hurricane Katrina (25.8.2005), where the airspace was strategically closed ahead of the storm and subsequently some of the ATC units were destroyed.

Sudden and degraded services followed by full destruction: The maritime earthquakes followed by tsunamis in 2004 (Bali, Thailand, Sri Lanka etc.) and 2011 (Japan, Sendai), where the ATC infrastructure was destroyed and no more service was offered.

Power outage followed by gradually degraded service, followed by reduced service: Las Palmas (03.01.2010), where, following a power outage of longer duration, the emergency batteries did not last sufficiently long. The operations room was moved to the adjacent simulator and a reduced service was delivered from there (an ad-hoc measure that had never been tested before).

6. Conclusion

Air traffic control has inherent features that allow it to be understood or recommended as critical infrastructure, and indeed some countries consider ATC part of their critical infrastructure.



Future air traffic management and air traffic control technology will increase the network-centric and continental scope of air traffic control, placing it among the assets which can be described as critical. Elements of SESAR (i4D and SWIM)25 rely by nature on net-centric and satellite-based technology, which will make the future ATM elements crucial with regard to critical infrastructure. Traditional air traffic control involves redundant elements and is inherently resilient. There is certainly a need to further explore the elements which might need increased focus as part of critical infrastructure risk management. In particular, new elements such as the European ATM Network Manager and some of the proposed improvement measures of SESAR will have to be looked at critically, as the proven and established oversight mechanism of ICAO might not be sufficient to address the criticality of these continental assets.

This article has outlined the current situation with regard to an uncoordinated and patchwork approach to the issue at hand. ATC will increasingly be part of the critical infrastructure of a nation, as well as of the management of air traffic over the European skies. A lot of research and discussion is needed to make the ATM world more aware of the challenges and nature of critical infrastructure.

References

BAUMGARTNER, M. & FINGER, M. (2013), European Air Transport Liberalization: Possible Ways out of the Single European Sky Gridlock, to be published in Utilities Policy.
BAUMGARTNER, M. (2007), Air Traffic Management: The Organization and Operation of European Airspace. London: Ashgate.
CRN Report (2008), Focal Report 1: Critical Infrastructure Protection, Zurich, October 2008.
CSS ETH Zurich (2013), Das Konzept der Resilienz: Gegenwart und Zukunft, Zurich, October 2013.
HEMDAL, H. (2014), Sea Changes, in Air Traffic Technology International 2014, UKIP, UK, ISSN 1366-7041.
JOHNSON, C.W. (2013), Cyber-Attacks on Safety-Critical Industries, presentation.
JOHNSON, C.W. (2006), Linate and Ueberlingen: Understanding the Role that Public Policy Plays in the Failure of Air Traffic Management Systems, University of Glasgow.
NOVEMBER, V. (2011), Rester connecté à tout prix, in Geographica Helvetica, Jg. 66, Heft 2.
PRB (2013), EU-Wide Targets for RP2: Indicative Performance Ranges, February 2013.
SCHOLL, W. et al. (2012), Critical Infrastructures in Switzerland and the Provision of Essential Goods and Services, in Integrative Risk Management: Fostering Infrastructure Resilience, www.cgd.swissre.com.

25 www.sesarju.eu



Why do we still have major accidents? – Lessons learnt from the chemical industry

Richard Gowland

ABSTRACT

Why do disastrous accidents keep recurring? This article discusses modern risk management practices in the chemical industry.

Introduction

Are events such as the fire and explosions at Texas City and Buncefield and the inundation of the Fukushima nuclear power plant so unusual that they escaped the risk management process of the responsible operators? Trying to make sense of these events leads me to ask some questions:
• Do we have the right tools for risk management?
• Is our thinking and risk management dominated by 'credible' scenarios to the point where the worst imaginable cases are consigned to the 'negligible frequency' risk category?
• Do we spend enough effort on exploring the possible causes of the worst cases and managing them?
• Are we complacent about our hazard identification and management processes?

If these serious events had been seen as realistic possibilities, then in each case a fairly simple examination of the possible causes and existing protections would have been enough to reveal the vulnerabilities of these infrastructures to serious disruptions. These vulnerabilities were well documented by official and unofficial reports. If we think of these as warning signs, then in the cases of Texas City and Fukushima some of them emerged prior to the event, but the recommendations were not fully implemented (Acton and Hibbs 2012). In these cases, there was plenty of evidence that serious events in the operations of the relevant industry, or in the natural environment, had occurred with significant frequency in the fairly recent past. Somehow, the lessons from these events seem to have been overlooked, forgotten or discounted.

In 2004, the European Process Safety Centre Technical Steering Committee raised the concern that although the overall number of process safety incidents was falling, those which did occur seemed to be very severe. This resulted in a move towards:
• more accurate means of recording incidents,
• an added severity metric, and
• managing the precursors of disasters more effectively.

Figure 1. Trailers at Texas City (left) and the Buncefield fire (right)

The topics in the debate and group work included process safety incident reporting – through support of the new American Petroleum Institute incident indicators (API RP 754) (API 2010) and the CEFIC Responsible Care® Process Safety Incident system – Loss of Primary Containment programmes, Safety Critical Systems, Leading Indicators, and ultimately a group which researched the subject of 'atypical scenarios'.

Our risk management processes aim to identify potential hazardous events, analyse and eliminate them where possible, and provide sufficient control and protection for the remaining risk. These processes have served us well when the possible scenarios have been identified, although the 'worst cases' sometimes present special challenges. What remains to be accomplished is the identification of all possible scenarios. Major accidents, such as the Texas City and Buncefield disasters, show us that we either did not identify or anticipate the events which eventually occurred, or we assumed that they were so unlikely as to be of an acceptable likelihood, or that they 'had never happened', or even that they were not worthy of a comprehensive study. Were these 'atypical scenarios'? The same pattern emerges from studies of the Fukushima nuclear power plant tragedy in Japan, where large tsunamis have been experienced several times in the last 500 years, but advice from the International Atomic Energy Agency on protection against these events seems to have been discounted by industry and the government (API 2010).



So, how can we improve our ability to find and deal with these 'atypical' scenarios (Acton and Hibbs 2012)?



Figure 2. The Fukushima nuclear plant before and after the disaster

Our hazard identification methods, such as Process Hazard Analysis (PHA), Hazard and Operability (HAZOP) and 'What if' studies, are quite effective when sufficient creativity is applied to identify what we can call 'atypical' scenarios. The other tools, such as Fault Tree Analysis, Layer of Protection Analysis (LOPA) and Quantitative Risk Assessment (QRA), can then address a complete set of scenarios to help us manage risk comprehensively. The studies carried out with hazard identification and risk assessment tools appear in some cases to come up short where worst cases are concerned. Efforts seem to be dominated by 'credible' events. The European Process Safety Centre (EPSC) has a working group that tries to identify best practices offering an improvement in scenario development which addresses these missing 'atypical' scenarios. The results of the work are encouraging and offer a way ahead. The work builds on strengthening and enhancing the tools we already use by adding dimensions that appear to have been missed in the past. EPSC's report describes practical steps which, when properly applied, will close some of the gaps in our process risk management systems.

As Paltrinieri et al. (2011) explain, we can categorise events into four classes:
• 'Known knowns' – events which we know about and can plan to prevent or control
• 'Known unknowns' – events which we can predict even if they have not occurred yet
• 'Unknown knowns' – events which have occurred but which we have failed to remember and study (e.g. loss of corporate memory)
• 'Unknown unknowns' – events which we have so far failed to predict or have dismissed as unrealistic.

We might see how our hazard identification and management processes can be used for each class. For example, PHA and HAZOP fit well into the task of finding the 'known knowns' and 'known unknowns' – so long as our thinking is sufficiently open to considering the worst case consequences. The 'unknown knowns' and 'unknown unknowns' seem to present problems which may expose weaknesses. There is no excuse for failures in corporate memory, or for failing to apply learning experiences from well-known events.

There is no excuse for failures in corporate memory or for failing to apply learning experiences from well-known events. If we really think the worst imaginable event can be described as 'never happened yet', can we be sure? The fact that events or initiators similar to the examples here had happened in the memorable or recorded past seems to have been overlooked. They seem to fit neatly into the 'unknown knowns' category:
• Have we forgotten?
• Did we fail to research?
• Did we discount the event as not applicable or not realistic?
In the last case, at least we considered it and, hopefully, based our decisions on technical factors such as the process, protective barriers and mitigation. In the case of Buncefield, a trawl through history (Paltrinieri et al. 2011) indicates that vapour cloud explosions have resulted from gasoline tank overflows seven times in the last few decades. So it does not seem to be an 'unknown unknown'. We are left with the 'unknown unknowns', which might be the final resting place of the real failures. It seems unreasonable to be criticised for the occurrence of something we could not possibly have imagined. If it were really true that we could not possibly have imagined it, I might be sympathetic. I suspect that such cases would be very rare.

Where we are now

Process Hazard Analysis is often driven by a questionnaire which embodies much of the learning experience of the company. A more detailed formal examination of worst cases within the analysis has been shown to yield good results. This includes a strict requirement to cover relevant events from the industry's history and predefined worst cases. As an example, the U.S. Environmental Protection Agency Risk Management Plan (RMP) requires that Vapour Cloud Explosion be included in studies for any flammable material (Kleindorfer et al. 2003). Simple but vital, I think – even if the physical properties, conditions of use and environment make it 'unlikely'. It is recognised that the apparent detonation which occurred at Buncefield may not have been predictable. However, a deflagration model would have predicted extensive damage on and off site. Was this missed?

A HAZOP study is frequently carried out in the steady state, and attention is often dominated by 'credible' rather than 'worst' cases. Furthermore, the worst cases may be consigned to the mitigation offered by emergency plans. Here there seem to be missed opportunities, which might be helped by starting with the worst cases and working backwards through a HAZOP process to determine root causes and what has to be true, or has to fail, for the worst case to occur. Risk Assessments such as LOPA and QRA will not be fully effective if they are not presented with the scenarios to study. There is an opportunity to make a much stricter inclusion of potential events from the technology and its history which might not be known by today's generation of operations.


Conclusions

We might conclude that:
• We sometimes fail to identify some significant scenarios.
• We might be unaware of events which have happened in the past and could apply to us.
• So-called 'unknown unknowns' are in many cases to be found in history or through a more creative approach to worst case scenarios and their management.

All members of the EPSC 'Scenarios' group have a formal three-phase approach to Hazard Identification:
• Project Management
• Normal Operations
• Management of Change
The Hazard Identification method is usually built into the Process Hazard Analysis and HAZOP methodologies, although member practices are not identical. Where HAZOP is concerned, all members carry out studies in the steady state, but HAZOP is not always conducted for the start-up and shut-down phases. These critical phases are not simply overlooked, however: they are covered by detailed instructions which include potential hazards and their consequences. The predominant cases in these studies are 'credible' and 'from learning experiences', and they rely very much on the discipline and creativity of a properly constituted and competent team. Whilst efforts are made to study worst cases in HAZOP, events seem to show that we are not always successful. Indeed, even when a worst case scenario is considered, HAZOP may not be the best method to study it. If this is true, the 'bow tie' has the potential to become the method of choice.

What comes out of this and a review of company practices? It could be an approach saying that we need to gain consistency from our Hazard Identification practices:


• Address the steady state comprehensively (e.g. HAZOP, FMEA or 'What if')
• Ensure that complementary start-up and shut-down studies are included in Hazard Identification (and study)
• Include worst cases at an early stage
There is also much to be gained from critical task analysis and human error analysis in predicting atypical events and managing them better. We should exploit these strengthened tools for the 'known knowns', 'known unknowns' and 'unknown knowns', and use a creative approach to imagine the 'unknown unknowns'. These can be studied with 'bow tie' analysis and perhaps, controversially, with a 'reverse' HAZOP approach, where we start with the worst case consequence and work out what can initiate it, or what must fail, for the full impact to be realised (a minimal sketch of this idea follows below). I conclude that there are very few 'unknown unknowns'. Certainly, the three major events described here are not 'unknown unknowns'. Furthermore, we may imagine that the likelihood of all the holes in the Swiss Cheese aligning is very low, or unimaginable, for these events… but can we be sure?
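The following is a minimal sketch of the 'reverse' HAZOP / bow-tie idea just described: start from the worst-case top event and work backwards to enumerate every combination of initiating failures (the cut sets of a simple fault tree) that must coincide for the full impact to be realised. The tree structure and the event names – a Buncefield-style tank overfill – are invented for illustration; a real study would of course be far richer.

```python
# A sketch of working backwards from a worst-case consequence to the
# combinations of failures that must all be true for it to occur
# (the left-hand side of a bow-tie). Structure and names are invented.

from itertools import product

# A node is either a basic initiator (a string) or a gate:
# ("AND", children) - all children must fail; ("OR", children) - any one suffices.
TOP_EVENT = ("AND", [
    ("OR", ["level gauge stuck", "level alarm ignored"]),    # overfill undetected
    "independent high-level switch fails",                   # last protection layer
    ("OR", ["ignition source present", "hot work nearby"]),  # vapour cloud ignites
])

def cut_sets(node):
    """Enumerate the combinations of basic failures that realise the node."""
    if isinstance(node, str):            # a basic initiating event
        return [{node}]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                     # any single child path is enough
        return [s for sets in child_sets for s in sets]
    # AND gate: every child must fail, so combine one path from each child.
    return [set().union(*combo) for combo in product(*child_sets)]

for cs in cut_sets(TOP_EVENT):
    print(sorted(cs))
```

Each printed set is one answer to the question posed above: what has to be true, or has to fail, for the worst case to occur.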

References

Acton, J. M. and Hibbs, M. (2012). Why Fukushima Was Preventable. Carnegie Endowment for International Peace. Available: http://carnegieendowment.org/2012/03/06/why-fukushima-was-preventable/. Last accessed 9 December 2013.

American Petroleum Institute (2010). API RP 754 – Process Safety Performance Indicators for the Refining and Petrochemical Industries.

Paltrinieri, N., Tugnoli, A., Bonvicini, S. and Cozzani, V. (2011). 'Atypical scenarios identification by the DyPASI procedure: application to LNG.' Chemical Engineering Transactions 24: 1171–1176.

Kleindorfer, P. R., Belke, J. C., Elliott, M. R., Lee, K., Lowe, R. A. and Feldman, H. I. (2003). 'Accident Epidemiology and the US Chemical Industry: Accident History and Worst-Case Data from RMP*Info.' Risk Analysis 23(5): 865–881.


Establishing the National Critical Infrastructure Inventory in the Context of the Swiss Critical Infrastructure Protection Programme

Stefan Brem, Head of Risk Analysis and Research Coordination, Federal Office of Civil Protection, Bern, Switzerland

ABSTRACT – Based on previous methodological research and practical experience, Switzerland has established a classified national inventory covering specific critical infrastructure objects from 28 critical sub-sectors.

Introduction

With the Federal Council's approval of the national strategy to protect Switzerland's critical infrastructure (CIP strategy) in June 2012 (Federal Council, 2012), the establishment and further development of a Critical Infrastructure (CI) inventory has become a crucial cornerstone of the national Critical Infrastructure Protection (CIP) programme. Already in 2009, Switzerland had, for the first time, prioritized its critical infrastructure sub-sectors. Based on this experience and further methodological developments, it was possible to establish a CI inventory from a national perspective by the end of 2012. The classified results from this process are used for various prioritization and preparedness planning activities.

Short review of sub-sector criticality

As an important starting point, it was crucial not only to identify the critical infrastructure sectors and sub-sectors on the national level, but also to establish a methodology to prioritize them from a generic national perspective (FOCP, 2010a). This allowed for more specific and dedicated analysis in the prioritized critical sub-sectors. The sub-sector criticality methodology considered three main components: the (inter-)dependencies between the critical sub-sectors, the consequences of a loss of service of the respective sub-sector for the population, and the consequences of a loss of service of the respective sub-sector for the economy. For the assessment, a generic total loss of the sub-sector's availability for three weeks was considered.

Figure 1: Infrastructure sub-sectors listed by their criticality (FOCP, 2013)


The assessment was conducted with experts from the federal administration in a Delphi-like workshop and validated by the Swiss working group on CIP, covering some 25 federal agencies and two cantonal representatives. The dependency analysis assessed both the number of connections between the sub-sectors and their 'strength'. The table in Annex A illustrates this analysis with the original 31 sub-sectors. The population impact included both the approximate number of people affected and the seriousness of the effect (from no disruption of daily life to serious disruption of daily life, including deaths and injuries). The economic impact included both the direct economic consequences of a loss of service in the sub-sector itself and the ripple effects in the dependent sub-sectors. The results of this first criticality assessment were also included in the basic CIP strategy approved by the Federal Council in July 2009 (Federal Council, 2009).
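As a purely illustrative sketch of how these three components might be combined into a single score, consider the following Python fragment. The weights, scales and sample figures are assumptions made for the example – the actual scales and results are those of the classified FOCP assessment (FOCP, 2010a) – but the structure mirrors the description above: a dependency term built from both the number and the strength of links, plus population and economic impact terms.

```python
# Illustrative only: one way to combine the three criticality components.
# Weights, scales and the sample figures below are assumptions, not the
# values used in the FOCP assessment (FOCP, 2010a).

from dataclasses import dataclass

@dataclass
class SubSector:
    name: str
    dependency_links: int       # number of sub-sectors depending on this one
    dependency_strength: float  # average strength of those links, scaled 0..3
    population_impact: float    # people affected x seriousness, scaled 0..3
    economic_impact: float      # direct loss plus ripple effects, scaled 0..3

def criticality(s: SubSector, w_dep=1.0, w_pop=1.0, w_eco=1.0) -> float:
    # The dependency term reflects both how many links exist and how strong
    # they are, mirroring the two-part dependency analysis described above.
    dep = s.dependency_links * s.dependency_strength
    return w_dep * dep + w_pop * s.population_impact + w_eco * s.economic_impact

sectors = [
    SubSector("power supply", dependency_links=25, dependency_strength=2.4,
              population_impact=3.0, economic_impact=3.0),
    SubSector("waste management", dependency_links=6, dependency_strength=1.2,
              population_impact=1.5, economic_impact=1.0),
]
for s in sorted(sectors, key=criticality, reverse=True):
    print(f"{s.name}: {criticality(s):.1f}")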

From sub-sector to object level criticality

In order to identify and prioritize not only the critical infrastructure sub-sectors but also the specific critical objects, the methodology was further refined and incrementally applied. The refined methodology includes four steps on the national level (FOCP, 2010b):
• As a first step, in each of the 28 sub-sectors, a functional mapping highlights the critical processes and 'supply chains' of the critical goods and/or services to be produced, managed, stored, distributed (etc.) in the respective sub-sector. On a generic level, the functional mappings include branches related to the production of the critical good and/or service, process management, task management (incl. crisis management), logistics, R&D, and governance.
• Based on this mapping, the relevant object groups, such as power plants, substations, data centres, train stations, airports, etc., are determined in a second step.
• In a third step, the related threshold levels are defined for every relevant object group previously determined. The methodology in Switzerland differentiates between five levels – from a local level relevant to a municipality up to a national/international level (a sketch of this level assignment follows after this list).
• In a fourth step, the individual CI objects are compiled and evaluated by their individual output potential (both quantitatively and qualitatively) and hazard potential (for example, dams and chemical facilities).
The methodology is compatible with the EU approach, but its focus lies on national importance rather than cross-border effects. Nevertheless, the CI Inventory considers not only cross-sectoral but also international aspects.
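The third step can be pictured as a simple thresholding exercise. The hedged sketch below assigns an object to one of five significance levels based on its share of national output; the level names between the two endpoints mentioned above, the cut-off values and the metric itself are all assumptions for illustration, since the actual thresholds are defined per object group in the classified methodology (FOCP, 2010b).

```python
# A hedged sketch of step three: sorting objects of one object group into
# five significance levels by output potential. The intermediate level names,
# the cut-offs and the share-of-national-output metric are all assumptions.

LEVELS = ["local", "regional", "cantonal", "supra-cantonal", "national/international"]
THRESHOLDS = [0.001, 0.01, 0.05, 0.10]  # assumed share-of-national-output cut-offs

def significance_level(output_share: float) -> str:
    """Map an object's share of national output to one of the five levels."""
    for threshold, level in zip(THRESHOLDS, LEVELS):
        if output_share < threshold:
            return level
    return LEVELS[-1]

# E.g. a power plant producing 8% of national output:
print(significance_level(0.08))  # -> supra-cantonal (under these assumed cut-offs)
```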

Figure 3: Prioritization process to establish CI Inventory (FOCP, 2013)


Collaboration with CI operators

The Federal Office for Civil Protection (FOCP), which bears overall responsibility for the national CIP programme in Switzerland, developed the methodology and also steered the identification process leading to the CI Inventory. The FOCP worked closely with the federal lead agencies of the respective sub-sectors, such as the Federal Office of Energy in the area of power supply. Additional federal and Cantonal agencies were included, as well as the leading national provider associations and the main critical infrastructure operators and owners in the respective critical sub-sectors. The identification process was launched incrementally in the individual sub-sectors to better include the relevant actors and to further improve the methodology. Overall, the methodology proved to be both systematic and pragmatic, as it provided reasonable guidance for conducting the identification process in all of the 28 sub-sectors, as diverse as cultural assets, fluvial transport, oil supply and waste management, to mention just four of them.

Main application of the inventory

The inventory has become a recognized instrument among CI operators and public agencies for further planning and prioritization activities in the area of risk and disaster management. In that respect, it serves preventive as well as preparedness and reactive tasks, including strategic business continuity management. More particularly, the classified information is shared with trusted partners as appropriate to conduct more specific vulnerability assessments, to support the prioritization process in the context of the national economic supply (e.g., the distribution of electricity in a situation of power shortage) and of other federal resources, and to support CI operators' specific planning activities and CIP activities by the Cantons – to name just a few applications. The Cantons are currently invited to include the findings from the national-level identification process in their Cantonal risk and disaster management processes and to complement the national inventory with their Cantonal CI objects. Nominally, the current version of the CI inventory includes only specific objects, but conceptually it also considers the underlying processes and supply chains. This further increases its value as a planning tool in the context of strategic business continuity and resource management.

The way forward

The CI Inventory was assembled for the first time, with the newly established methodology, by the end of 2012. Currently, the Cantons – as described above – are invited to complement the national inventory. The inventory will be regularly updated with new relevant information and will be thoroughly reviewed every four years. By then, it will also be fully integrated into the various prioritization and preparedness planning activities. Given the current and ongoing discussions on cyber security, data protection and integrity remain high priorities when it comes to data sharing.

Finding the right balance between sharing information with relevant partners and, at the same time, protecting sensitive information remains high on the agenda.

References

Federal Council (2009). The Federal Council's Basic Strategy for Critical Infrastructure Protection: Basis for the national critical infrastructure protection strategy. Federal Council, Bern, May 18, 2009.

Federal Council (2012). Nationale Strategie zum Schutz kritischer Infrastrukturen. Schweizer Bundesrat, Bern, June 27, 2012.

FOCP (2010a). Schlussbericht Kritikalität der Teilsektoren. Bundesamt für Bevölkerungsschutz (BABS), Bern, September 11, 2010.

FOCP (2010b). Methode zur Erstellung des SKI-Inventars. BABS, Bern, October 22, 2010.

FOCP (2013). Brem, Stefan: Presentation on the Swiss CIP Programme at the APPSNO Conference, Singapore, April 9, 2013.

Further information

If you would like to find out more about the Swiss national CIP programme, please visit our website at www.infraprotection.ch or email [email protected].

About the author

Dr. Stefan Brem joined the Federal Office for Civil Protection within the Swiss Federal Department of Defence, Civil Protection and Sport in March 2007, where he leads the section on Risk Analysis and Research Coordination. His unit is responsible for the national programme on Critical Infrastructure Protection (CIP) and for disaster risk assessments at the national and Cantonal levels. Prior to his current position, he served for four years at the Federal Department of Foreign Affairs' Centre for International Security Policy, where he was responsible, inter alia, for CIP, Energy Security, Security Sector Reform, Border Security and Private Military Companies. He completed his dissertation in Political Science at the University of Zurich in 2003.


Annex A

Figure 2: Table to assess the sub-sector criticality (FOCP, 2010a, p. 9)


Conferences

The Transport Area of the Florence School of Regulation

The Florence School of Regulation (FSR) is a partnership between the European University Institute (EUI), the Council of European Energy Regulators (CEER), the Independent Regulators Group (IRG) and the European Platform of Regulatory Authorities (EPRA), and it works closely with the European Commission. The focus lies on the regulation of the energy sector (electricity and gas), the Communications & Media sectors and the Transport sectors. The FSR's objective is to bring out the European dimension of regulatory topics and to contribute to safeguarding the common good of Europe by ensuring high-level and independent debate and research on economically and socially sound regulation. The FSR provides a European forum where academics and practitioners can meet and share their views and knowledge. It does this by:
• providing state-of-the-art training and encouraging knowledge sharing with the most up-to-date tools,
• organising policy events, conferences and executive seminars that deal with key regulatory issues,
• promoting international networking through knowledge and practice exchange,
• producing analytical and empirical research in the field of regulation.
To learn more, visit our website: www.florence-school.eu

FSR-T: Forthcoming Events

2nd Florence Intermodal Forum – 3 March 2014
6th Florence Air Forum – 24 March 2014
8th Florence Rail Forum – 28 April 2014
5th Florence Urban Forum – 6 May 2014
3rd Conference on the Regulation of Infrastructure Industries – 13 June 2014

New workshop paper series

This special edition of the European Transport Regulation Observer focuses on the European Air Transport Executive Seminar, which aimed at providing a comprehensive overview of key issues surrounding the new Single European Sky (SES) package, and at providing the opportunity for a timely and frank exchange of views on selected aspects of the SES's upcoming evolution, thanks to the presence of selected senior managers. Download the editorial by Prof. Matthias Finger (EUI/EPFL) on 'Next Steps in Achieving the Single European Sky'.

For more information on our activities please contact:
Communications & Media: [email protected]
Transport: [email protected]
Energy: [email protected]


Announcements

IGLUS – Innovative Governance of Large Urban Systems
2014–2015 edition

Executive Masters in Innovative Governance of Large Urban Systems
Chair Management of Network Industries (MIR)

If you are concerned about:
• the performance of cities (sustainability, competitiveness, quality of life, innovation),
• the performance of urban infrastructure systems (transport, energy, communication, water, greens), and
• how governance relates to such urban performance,
... then EPFL's* Executive Masters in Innovative Governance of Large Urban Systems (IGLUS) is the right choice for you.

*EPFL is one of the best universities in the world, well known for the quality of its research and education, and located at the heart of Europe.

The IGLUS Masters consists of five modules of two weeks of intensive training, taking place in five different cities, offered by EPFL in collaboration with:
• Michigan State University, United States
• Tecnologico de Monterrey, Mexico
• Hong Kong University of Science and Technology, China
• American University of Sharjah, UAE
• Kadir Has University, Turkey
and a capstone master's thesis.

The application system for the 2014–2015 edition is now open: WWW.IGLUS.ORG


General Application Requirements:
• Educational background: Master's degree or equivalent
• Language: the programme will be taught in English
• Working experience: 5 years of professional working experience
• A completed application form, accessible via our website: WWW.IGLUS.ORG

Questions? Please feel free to send us an email ([email protected]).

Contact Information
Website: WWW.IGLUS.ORG
E-mail: [email protected]
Telephone: +41 (0)21 693 00 03
Fax: +41 (0)21 693 00 80
