The RMS Titanic disaster seen in the light of risk and safety theories and models

Jon Ivar Håvold
Ålesund University College

ABSTRACT: The continuing public fascination with the sinking of RMS Titanic more than a century ago is shown by the huge interest in discussion forums on the Internet, the rebuilding of a new Titanic cruise vessel and, for example, the fact that more than 830,000 people passed through "Titanic: The Exhibition" during its six-month run at the Florida International Museum in 1998, the most popular exhibition at the museum ever. In this paper the Titanic accident is analysed using risk and accident theories. The questions discussed are "Could a Titanic accident happen today?" and "Could the Titanic accident have been foreseen?"

1. INTRODUCTION

Only one disaster of the last 100 years remains etched in people's awareness and imagination a century after the accident: Titanic. Thirty-five movies have been made with themes from the accident, several books have been written and there are around 15 million listings on Google.

One hundred years ago the highly travelled route across the North Atlantic was very lucrative. Shipbuilders constructed ocean liners that grew larger and faster with each generation. The driving forces in the market were speed, size and luxury. Among the shipping lines in an endless war for dominance was the White Star Line. In 1902 the White Star Line was sold to the American financier J. Pierpont Morgan, who became interested in shipping companies because of the growing passenger traffic to North America. It was Morgan's money that allowed the dream of the "Olympic Class" liners Olympic, Titanic and Britannic to come true.

When Titanic left Queenstown in Ireland on the afternoon of Thursday 11 April 1912 she was following what had been accepted since 1899 as the outward-bound route for mail steamers to the United States. The selection of the route was based on the importance of avoiding areas where fog and ice were prevalent at certain seasons, without lengthening the passage across the Atlantic.

On 15 April, at 2.20 am, Titanic broke apart and foundered. 1,514 people lost their lives and 710 survived.

2. RISK / SAFETY THEORIES AND MODELS

2.1 Event / condition analysis and defence in depth (Swiss cheese model)

An event network analysis (Kristiansen and Eide, 1994) gives an excellent overview and simplifies the accident analysis, because it makes it easy to focus on the most important events leading up to the accident (Fig. 1). The model can easily be applied to an accident like Titanic, because the accident is so well documented (Kuntz, 1998; Report, 1998). The events and conditions in the network can be categorised and divided into several sub-events and conditions by their characteristics: Operator action (OA); Change in system conditions (SC); Function deficiency (FD); Environmental conditions (EC); Environmental events (EE); and Linking events (LE). An accident might extend over a long span of time during which the level of risk builds up, and the course of events and conditions can therefore be divided into phases: the latent phase, the initiating phase, the escalating phase, the critical phase and energy release.

The latent phase is very important. James Reason (1990) distinguishes between two types of errors: active errors, whose effects are felt almost immediately, and latent errors, whose adverse consequences might lie dormant within the system for a long time, only becoming evident when they combine with other factors to breach the system defences. Analyses of recent accidents like the Challenger disaster (Vaughan, 1996) show that the root of the accident is often present within the system long before the accident appears. The same might be the case for Titanic. The accident might be rooted in the company culture and belief in superior technology, which in turn can give a false sense of security and thereby influence leadership and decisions.
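As an illustration of how such a network can be worked with, the sketch below represents events and conditions as a small Python data structure. The node labels are taken from Fig. 1, while the class itself, its field names and the category assignments are my own approximate reading, not notation from Kristiansen and Eide (1994).

```python
from dataclasses import dataclass, field

# Categories from Kristiansen and Eide (1994): Operator Action, change in
# System Conditions, Function Deficiency, Environmental Conditions,
# Environmental Events and Linking Events.
CATEGORIES = {"OA", "SC", "FD", "EC", "EE", "LE"}
PHASES = ["latent", "initiating", "escalating", "critical", "energy release"]

@dataclass
class Node:
    """One event or condition in the accident network."""
    label: str
    category: str                  # one of CATEGORIES (approximate reading of Fig. 1)
    phase: str                     # one of PHASES
    leads_to: list["Node"] = field(default_factory=list)

# A simplified chain from the Titanic network (Fig. 1):
belief   = Node("The ship that could not sink", "SC", "latent")
prestige = Node("Prestige: wants to reach NY on Tuesday evening", "OA", "initiating")
speed    = Node("Speed maintained despite ice warnings", "OA", "escalating")
iceberg  = Node("Iceberg strikes the ship on the starboard bow", "EE", "critical")

for upstream, downstream in [(belief, prestige), (prestige, speed), (speed, iceberg)]:
    upstream.leads_to.append(downstream)
```

Walking the `leads_to` links from the latent-phase nodes reproduces the phase-ordered course of events that the figure visualises.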

[Figure 1: Event and condition network for the Titanic accident. Events and conditions, labelled OA, SC, FD, EC, EE or LE, are plotted against time through the latent, initiating, escalating and critical phases to the energy release: from the belief in an unsinkable ship and the choice of the "standard" route, through maintained speed despite several ice warnings, to the 11.40 pm collision with the iceberg, the distress signals and lifeboat loading, and the sinking at 2.20 am with around 1,500 killed.]

If it was true that Captain Smith's life at sea had been as uneventful as he told journalists, he might have built up huge self-confidence and an attitude that he and his ship were invincible. Language problems that can lead to communication problems can also be looked at as a latent error. The analysis of the latent phase focuses on factors that have been influential in the course of events leading to the accident. The fact that Titanic was perceived to be almost unsinkable[1] might have led to less cautious navigation and a low priority on training.

[1] The fact that Thomas Andrews (Titanic's chief designer) had to tell Captain Smith after the collision that Titanic could stay afloat for only about two hours confirms this belief. Based on this information Captain Smith decided to evacuate the ship.

The initiating phase and the escalating phase are results of the latent phase. The most important event in the initiating phase seems to be prestige. Managing Director Ismay (White Star Line) and Captain Smith wanted to beat the record and reach New York by Tuesday evening. There were no discussions about sailing route and speed despite ice warnings; behaviour seems to have been guided by habit and company culture. In the escalating phase they maintained and even increased speed. The bridge management (Captain Smith and First Officer Murdoch) ignored signals from the crow's nest and failed to respond to the situation. The false perception of the situation led to the wrong decisions, which in Titanic's case was no decision, based on lack of knowledge about manoeuvrability, company culture and lack of routines and training.

James Reason (1990) uses a picture of Swiss cheese slices with holes to show a trajectory of opportunity penetrating several defensive systems (defence in depth) and resulting in an accident. This is the outcome of a complex interaction between latent failures and a variety of local triggering events. The chance of a huge disaster is small because the chance of such a trajectory of opportunity finding loopholes in all the defences is normally small. On the other hand, risk is a subjective construct. When the chance of a large-scale accident is small (low frequency), even if the consequence might be enormous, mechanisms that relate to how we perceive risk make the perceived risk smaller than would be the case if it were assessed objectively. In the escalating phase the speed was maintained and signals from the crow's nest were ignored. When the situation became critical it was too late to avoid the accident.
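Reason's argument can be given a simple quantitative illustration (the numbers here are invented, not taken from the accident record). If a trajectory of opportunity must find holes in k independent defensive layers, and layer i has a hole at the critical moment with probability p_i, then

\[
P(\text{accident}) = \prod_{i=1}^{k} p_i, \qquad \text{e.g. } k = 4,\ p_i = 0.1 \ \Rightarrow\ P(\text{accident}) = 10^{-4}.
\]

The catch, and the point of the latent-phase analysis above, is the independence assumption: a single latent condition, such as a company culture of invincibility, can open holes in several layers at once, making the true probability far larger than the product suggests.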

2.2 The Domino Theory

Heinrich's Domino Theory (Heinrich, 1931) claims that an accident is merely one factor in a sequence (ancestry and social environment; worker fault; unsafe acts together with mechanical and physical hazards; accident; and damage or injury), and that if the sequence is interrupted by the elimination of one of the five factors that comprise it, the injury will not occur. Several "facts" can be looked upon as factors in an accident sequence for Titanic: the competitive environment; persons involved were ignorant of safe practice and did not react to warnings from other ships or the crow's nest until it was too late; and they were operating at an unsafe speed.

2.3 The iceberg theory (accident distribution ratio)

Heinrich (1931) found that for every serious accident there were 30 warnings from near accidents, 300 warnings from near misses, and a very large number of unsafe acts and conditions. His message is that we must take all these pre-warnings seriously. Later, other scientists have confirmed the "iceberg" or "pyramid" view of accident causation (Bird, 1969); even if their ratios do not exactly match Heinrich's, they are comparable. Valid definitions and honest answers might be extremely difficult to obtain because perception and willingness to report accidents vary a great deal between industries, organisations and people (Hill et al., 1994).
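Read literally, the ratios quoted above give a crude back-of-envelope estimator. The function below is purely illustrative: it uses the 1 : 30 : 300 figures as quoted in this paper, not an established formula from Heinrich (1931).

```python
def implied_serious_accidents(near_accidents: int, near_misses: int) -> dict:
    """Expected number of serious accidents implied by each class of
    warning signal, using the 1 : 30 : 300 distribution quoted above."""
    return {
        "from near accidents": near_accidents / 30,
        "from near misses": near_misses / 300,
    }

# Titanic's maiden voyage (see the list that follows): 2 near accidents
# and, stretching the definition, 7 ice warnings counted as near misses.
print(implied_serious_accidents(2, 7))
# -> roughly 0.07 and 0.02 expected serious accidents
```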

What is a near accident? What is a near miss? If we focus on the Titanic accident from the perspective of Heinrich's accident distribution ratios, the following support is indicated:

1. Before leaving harbour on April 10 on her maiden voyage, the suction from Titanic snapped another ship's mooring ropes and the two nearly collided. The ship's name was New York. (Near accident.)
2. When leaving Southampton on April 10, one of Titanic's coal bunkers was on fire. (Near accident.)
3. On April 14 the ship received 7 ice warnings during the day, which, if we stretch the definition, can be seen as near misses.
4. On April 14 the lookout saw an iceberg dead ahead. The iceberg struck Titanic on the starboard side of her bow.

Even if the numbers of recorded near accidents and near misses are small, the case of Titanic seems to support the accident distribution ratio. In this case, a serious accident came on the maiden voyage. For other ships serious accidents will never occur. Many statisticians find Bayesian statistics advantageous over frequency statistics, especially when modelling low probability high consequence events like this (Luxhøy and Coit, 2006).
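A minimal sketch of that Bayesian point, with an invented prior (this is my illustration, not the model in Luxhøy and Coit, 2006): under a Beta prior on the per-voyage accident probability and a binomial likelihood, the posterior mean stays above zero even after a long run of uneventful voyages, whereas the raw frequency estimate would be exactly zero.

```python
def posterior_mean(alpha: float, beta: float, voyages: int, accidents: int) -> float:
    """Posterior mean accident probability for a Beta(alpha, beta) prior
    updated on `voyages` trials with `accidents` observed failures."""
    return (alpha + accidents) / (alpha + beta + voyages)

# Invented prior: roughly one serious accident per 1,000 voyages.
print(posterior_mean(1, 999, 0, 0))     # prior mean: 0.001
# After 500 uneventful crossings (Olympic's record) the estimate shrinks
# but never reaches the frequentist 0/500 = 0.
print(posterior_mean(1, 999, 500, 0))   # ~0.00067
```

This is exactly the trap described later in the paper: in the absence of an accident, a pure frequency view suggests that an accident is virtually impossible.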

2.4 High Reliability Theory (HRO) and Normal Accident Theory (NAT)

HRO and NAT are two important schools of thought within the organisational literature on safety and reliability in complex technological systems. Since the two theories can be looked upon as "competitive" in a number of ways, they will be treated together.

HRO[2] has its intellectual roots in a tradition within organisational theory which believes that hazardous technologies can be safely controlled by complex organisations if wise design and management techniques are followed. This conclusion is based on the argument that effective organisations can meet six specific conditions: leaders place high priority on safety; significant levels of redundancy exist; recovery and control are central elements; error rates are reduced through decentralisation of authority; there is situation-oriented management; and organisational learning takes place through a trial-and-error process.

[2] Information used for describing High Reliability Organisation theory is from Weick (1987), Roberts (1990), LaPorte and Consolini (1991) and Sagan (1993).

NAT[3] argues that academic theorists often construct models of organisations whose behaviour is much more rational and effective than that displayed by complex organisations in the real world; serious accidents in such organisations are inevitable over time. Perrow analyses organisations and safety on two variables: interactions, on a scale from linear to complex, and coupling, ranging from loose to tight. Linear systems have characteristics like easy access and replacement of equipment, dedicated connections, segregated subsystems, few feedback loops, direct information and extensive understanding of the system. Complex systems have characteristics like proximity, interconnected subsystems, limited substitutions, feedback loops, indirect information and limited understanding. Characteristic of tightly coupled systems is that delays in processing are not possible, there is little slack in supplies, equipment and personnel, and buffers, redundancies and substitutions must be designed in. Loosely coupled systems have the characteristics that processing delays are possible, the order of sequences can be changed, alternative methods are available, slack in resources is possible, and buffers, substitutions and redundancies are fortuitously available.

[3] Information used for describing Normal Accident Theory is from Perrow (1984), Perrow (1994a), Perrow (1994b) and Sagan (1993).
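As a toy illustration of the two-variable scheme (the numeric scores and thresholds are invented; Perrow's chart is qualitative):

```python
def perrow_quadrant(interaction: float, coupling: float) -> str:
    """Place a system on Perrow's interaction/coupling chart.
    Both scores are assumed to run from 0 (linear/loose) to 1 (complex/tight)."""
    complex_interactions = interaction > 0.5
    tight_coupling = coupling > 0.5
    if complex_interactions and tight_coupling:
        return "complex + tight: NAT expects 'normal accidents'"
    if complex_interactions:
        return "complex + loose: decentralisation can absorb surprises"
    if tight_coupling:
        return "linear + tight: centralised control is workable"
    return "linear + loose: easiest to manage"

# Perrow places marine transport around 'medium' on complexity and
# relatively tight on coupling; the scores below are invented guesses.
print(perrow_quadrant(0.55, 0.75))
```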

2.4.1 Titanic and NAT/HRO

NAT would predict that accidents are inevitable in complex and tightly coupled systems like Titanic, while HRO would claim that accidents can be prevented through good organisational design and management. Perrow claims that disastrous accidents will occur eventually, and Titanic was the ship it happened to by sheer chance: "Accidents are inevitable and happen all the time; serious ones are inevitable though infrequent; catastrophes are inevitable but extremely rare" (Perrow, 1994). Members of the HRO school claim that something close to organisational perfection is possible: "Hazardous organisations that engage in nearly error free operations" (Roberts, 1990); "A very low error rate and an almost total absence of catastrophic failure" (LaPorte and Consolini, 1991). It seems that the difference between the two schools is large. However, Sagan (1993) finds them to be relatively close, merely using different points of view: "Such imprecise language suggests that the two theoretical schools have a common estimate about the probability of dangerous accidents despite the strong difference in the tone of their conclusions: Perrow might look at a glass of safety and find it 1% empty; high reliability theorists may see a glass of safety as 99% full" (Sagan, 1993).

NAT claims that safety is one of a number of competing objectives in most organisations, and that high reliability theorists have an unrealistic view when they claim that safety has to be the number one priority among organisational objectives. We could ask whether the organisational design and management on Titanic and at the shipping company focused on safety. From what we have learnt the answer must be both yes and no (Kuntz, 1998; Behe, 1997). Yes, because the ship was furnished with state-of-the-art technology like a double-bottomed hull and a complex system of watertight compartments. With the watertight doors closed, and given her pumping arrangements, Titanic could remain afloat even with severe damage, features that prompted the periodical The Shipbuilder to deem Titanic "practically unsinkable". In addition, the management exceeded the Board of Trade's requirements for life-saving appliances, even though over three times as many lifeboats would have been needed to accommodate all her passengers. Titanic also had the latest innovation, a Marconi wireless, an important safety feature allowing operators to transmit distress calls in the event of an emergency. No, because neither the culture of the ship-owner nor that on board Titanic was a "safety culture"; it was a culture driven by competition and economic forces. Looking at the initial talks about building Titanic between Managing Director Ismay of the White Star Line and Lord Pirrie (partner at Harland and Wolff), the overriding reason for ordering the ship was the competitive environment on the route across the North Atlantic. Safety seems to have been one objective, but it was competing with other objectives and was certainly not considered the number one priority.

NAT claims that redundancy often causes accidents because it increases interactive complexity and opaqueness and encourages risk taking. HRO claims that redundancy enhances safety because overlap can make "a reliable system out of unreliable parts". James Reason (1997) agrees with Perrow that defence in depth, based upon redundancy and diversity, makes the system more opaque to its operators, and hence allows the insidious build-up of latent conditions. Defences, barriers and safeguards add additional components and linkages. This not only makes the system more complex, but the subsystems can also fail catastrophically in their own right.

The sheer fact that Titanic was the world's largest liner, on her maiden voyage, furnished with state-of-the-art technology, might have increased the complexity and opaqueness of the system.

NAT claims that decentralised decision making is needed to handle complexity but that centralisation is needed in tightly coupled systems; this contradiction makes it very difficult to organise for safety. HRO claims that a "culture of reliability" will enhance safety by encouraging uniform and appropriate responses by field-level operators. In his interaction/coupling chart Perrow places the marine transport industry around "medium" on complexity and relatively tight on coupling. The need for both centralised and decentralised decision making, depending on the type of decision, seems to be important for safety on a vessel like Titanic according to both HRO and NAT. Today, and even more so around 1910, the management on board a ship is organised for centralised decision making, which might be good for planning but not for handling complexity.

NAT claims that organisations cannot train for unimagined, highly dangerous operations. HRO claims that continuous operations, training and simulation can create and maintain high reliability operations. Research on the influence training might have on safety in commercial aviation and at sea shows that training is an important factor for increased safety and can influence decision making in critical phases (Flin, 1996; Helmreich, 1997). On the other hand, if the situation is completely unimaginable, training might create stereotypical behaviour which prescribes the wrong medicine if the situation is misinterpreted (Reason, 1990; Schaub, 1997). Safety training seemed not to be necessary on Titanic, since she was the ship that could not sink (Behe, 1997).

NAT claims that denial of responsibility, faulty reporting and reconstruction of history cripple learning efforts. HRO claims that trial-and-error learning from accidents can be effective and can be supplemented by anticipation and simulation. This might be looked upon as two sides of the same coin. NAT's claim that the data we use for our learning are biased seems to be correct, because of organisational and professional culture, lack of training in how to report an accident,

and so on (Hill et al., 1994). On the other hand, even if there are problems with the validity of some of the information gathered, it might still give important information which can be used for learning purposes (Senge, 1990). After the Titanic accident it seems that the shipping company White Star Line became obsessed with safety and went through all their ships, mainly the two remaining Olympic Class liners, with safety in mind.

2.5 The Multi-Level Model

Rasmussen (1997) created a cross-disciplinary risk and safety model which considers risk management to be a control problem and represents the control structure involving all levels of society. If we evaluate Rasmussen's socio-technical system in relation to Titanic, several areas of conflict that might lead to accidents can be identified.

At government and regulator level. The pace of change in the shipping industry around the year 1900, with fierce competition, more speed, larger ships, new routes and so on, seems to have been much faster than the pace of change in management structure, regulations and legislation (Savage and Appleton, 1988; Senge, 1990). At this level it seemed important to maintain a national shipping industry, and regulators therefore did not push for "costly" safety measures. Legislation on a supranational level was non-existent, and on the national level it was retroactive, as legislation normally is. As a result of the sinking of Titanic, the International Convention for the Safety of Life at Sea, better known as SOLAS, which covers a wide range of measures designed to improve the safety of shipping, was adopted in 1914.

At company level. The fierce competition mentioned above led to changes in structure and ownership, with the international investor J.P. Morgan entering the market. The incentives for the decision-makers seem to have been short-term financial and survival criteria rather than long-term safety impacts. Titanic's owner was one of the world's largest investment banks.

At management level. Design based on established practices might be inadequate during a period of rapid change. Titanic was to become the world's largest ship, with the potential for the world's largest disaster.


Quality assurance at the shipyard has been questioned, and one reason mentioned for the sinking of Titanic is bad material, especially the rivets. Only twenty of the 3,000,000 rivets used on Titanic have been tested, and they were found to contain too much slag. Bad workmanship has also been mentioned. One way to assess these allegations is to compare Titanic with Olympic: the same men built Titanic and Olympic with the same metal and the same rivets, at the same yard, at the same time. Olympic crossed the Atlantic 500 times and earned the name "Old Reliable".

At staff level. It seems that there was inadequate guidance from the Captain to the staff and, even worse, Captain Smith either let himself be pressured by Managing Director Ismay (White Star Line) or was himself so competitive that he wanted to reach New York faster than the record held by Olympic. During the subsequent hearings Ismay denied having had any conversation with Smith in which they compared Titanic's performance to that of Olympic, or in which it was agreed that Titanic should arrive in New York on Tuesday evening rather than Wednesday. But several of the surviving passengers had heard Ismay talking about reaching New York ahead of schedule, and it was later speculated that the main reason for Ismay's denial was that the company was afraid of lawsuits (Behe, 1997).

2.6 Migration towards the boundary

Rasmussen (1997) claims that human behaviour in any work system is shaped by objectives and constraints which must be respected by the actors for work performance to be successful. Human behaviour will tend to migrate towards the boundary of acceptable performance. In this theory it is important to find the boundary where the organisation moves from a safe to an unsafe state: we have to identify the constraints on the work system and the boundaries of acceptable operations. Rasmussen (1997) splits accidents into three classes, characterised by their frequency and by the magnitude of loss connected to the individual accident; a major ship accident is described as a medium-size, infrequent accident. The process of finding the boundary and drawing defence lines evolves from design improvements in response to analysis of the latest major accident, an incremental process toward improved safety.

It seems that the Titanic accident initiated activity towards safety at government and regulator level[4] and at company and management level. We have mentioned earlier that the White Star Line became obsessed with safety. In Titanic's case, before the accident, both the market and the management seemed to press for efficiency, and there were no defence systems based on a "safety culture" that came into effect before the boundary of acceptable performance was reached. I find this model interesting because if we are able to find the "boundary of safe performance", add an error margin, and find measurable indicators of acceptable performance, then the whole process of safety decisions, targets and communication within the organisation becomes simpler and safer than if we try to control errors. The model also visualises one of the problems that might occur: added protection is traded off for improved productivity, which is called risk homeostasis (Wilde, 1982). The history of marine accidents is littered with "radar-assisted accidents" (Sanquist et al., 1996), and on the roads there are already plenty of ABS-assisted accidents (Status, 1994).

2.7 Risk homeostasis (Wilde, 1982)

Most researchers agree that people adjust their behaviour to compensate for the risk they perceive. People routinely behave more cautiously when they consider themselves at risk; they drive more slowly in rain or snow. In the terminology of safety engineering this behaviour is known as risk compensation (Filley, 1999). The debatable question is how much they compensate. The theory of risk homeostasis predicts that people will dissipate roughly all the enhanced safety imposed upon them in other desirable risky activities. The increased safety on Titanic (the ship that could not sink, with Marconi radio and watertight compartments) seems to have been compensated for by high speed and "risky" behaviour in a situation where several ice warnings had been received.
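Wilde's prediction can be sketched as a trivial feedback model (all parameters invented): the actor picks the behaviour at which perceived risk equals a fixed target, so any measure that lowers perceived risk per unit of exposure is converted into more exposure.

```python
def chosen_speed(target_risk: float, perceived_risk_per_knot: float) -> float:
    """Speed at which perceived risk equals the target risk, assuming a
    toy model where perceived risk grows linearly with speed."""
    return target_risk / perceived_risk_per_knot

TARGET = 1.0                        # arbitrary units of acceptable risk
print(chosen_speed(TARGET, 0.050))  # 20 knots before the 'safety' features
# An 'unsinkable' hull and a Marconi set lower perceived risk per knot ...
print(chosen_speed(TARGET, 0.040))  # ... and the chosen speed rises to 25
```

On this toy reading the safety improvements are "spent" on extra knots, which is exactly the compensation the section describes.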

3. DISCUSSION AND CONCLUSIONS

3.1 Could the Titanic accident happen today?

Does an organisation or a society learn everything it could about how best to handle a new accident in the future from an accident? Are organisations and society satisfied with single-loop learning, where they only change the work procedures, or do they go for double-loop learning, where system design and culture are changed too?

[4] Adoption of the SOLAS convention in 1914 was a direct result of the Titanic accident.


It seems that the Titanic accident changed both the work procedures and the system design. White Star Line management became obsessed with safety, and as a result of the accident SOLAS was adopted and an International Ice Patrol was created to guard the North Atlantic sea lanes. Even if the Titanic accident and other accidents have led to improvements in safety, a large number of accidents, like Piper Alpha, Herald of Free Enterprise, Scandinavian Star and Estonia, have happened in recent decades. In contrast with those who argue that we can learn from accidents (Senge, 1990; Reason, 1990; Patè-Cornell, 1993), Perrow (1994b) claims that we are not going to learn much about the proximate causes of accidents in high-risk systems from an examination of systems that have had accidents. Titanic was an accident waiting to happen. We do not look at systems in the same industry where no accidents have been experienced and conduct a thorough investigation to look for the causes of a hypothetical accident. Instead, in the absence of an accident, investigatory agencies often find what they were looking for: that everything is in order and an accident is virtually impossible (Perrow, 1994b). Perrow (1984) argues that where we find complex interactions and tight couplings, serious accidents are inevitable, no matter how hard we try to avoid them. Studies of the Herald of Free Enterprise accident (Reason, 1990) and the Piper Alpha accident (Patè-Cornell, 1993) conclude that the accidents were the result of an accumulation of management errors, with a final event triggering the catastrophe. This is a "copy" of the Titanic accident. The answer to the question of whether the Titanic accident could happen today must therefore be yes, even if persons, organisations and society have learnt from the accidents that have happened.

3.2 Could the Titanic accident have been foreseen?

This question is very difficult to answer with a clear yes or no, even if we use all the theories and tools we have. From the event and condition analysis we can see that there certainly were latent errors waiting for a release, but on the other hand almost all organisations have latent errors up their sleeve. In a "former life", as a manager in an insurance company, I regularly discussed the quality of our industrial and marine portfolios and of potential new customers. During these discussions we often linked the safety culture in a company to management and employee attitudes, to how the management and the employees behaved and to what

the workplace or the ship looked like. Sometimes we drew the conclusion that it was not a question of if a major accident would happen, but of when. We believed that even if we increased the normal premium four to five times for some companies, they would never pay enough. Looking back, I must say that we were often right in our evaluations; the latent errors were obvious in many of the situations we discussed.

The question of warnings and signals in advance is a difficult one, and foresight is indeed limited. A good illustration is Sagan's (1993) safety glass, where it is a matter of judgement whether the glass is seen as nearly empty or nearly full. When accidents occur they are usually a surprise to the management and the public (Turner and Pidgeon, 1997), and the first reaction is that this is unbelievable, fantastic, and so on. With hindsight it is nearly always possible to identify, prior to a disaster, the presence of warning signs which, if heeded and acted upon, could have thwarted the accident sequence. The question that often arises after the event is: how could these warnings have been missed or ignored at the time? There are a number of possible reasons why this happens, but most of them have to do with the fact that after-the-fact observers, armed with "20/20 hindsight", view events quite differently from the active participants, who possessed only limited foresight. Knowing how events turned out, what psychologists call outcome knowledge, profoundly biases our judgement of the actions of those on the spot. Several studies (Reason, 1997) have shown that people greatly overestimate what they would have known in foresight, that they overestimate what others knew in foresight, and that they misremember what they themselves knew in foresight.

It seems that James Reason, Jens Rasmussen, Charles Perrow and Karl Weick agree that accidents might be the result of highly complex coincidences which could rarely be foreseen by the people involved. Perrow (1984) says that great events have small beginnings, and Weick (1987) says that to anticipate and forestall disasters is to understand regularities in the way small events can combine to have disproportionately large effects.


4. CLOSING REMARKS

When studying books and material describing the Titanic disaster there are many pitfalls to be aware of. My main sources for this essay are the two enquiries from 1912, one conducted by the United States Senate and one conducted by a wreck commissioner appointed by the Lord Chancellor, resulting in the British Report on the Loss of the Titanic. Both reports seem to build on an inadequate understanding and inadequate "models" of human behaviour, which is no surprise, because at the time these enquiries were conducted the concepts of the human factor and organisational culture as causes of accidents had not yet been "invented". Those conducting the enquiries seemed most interested in guilt and the legal perspective, and did not follow up clues with questions about persons or organisations. An improvement in recent years has been the development of independent organisations for accident inquiry, with a new mandate: the current objective of investigations is to elucidate matters deemed significant for the prevention of transport accidents, not to allocate blame and liability.

5. REFERENCES

Argyris, C. & Schön, D.A. 1978. Organizational Learning. Reading, Mass.: Addison-Wesley.
Behe, G. 1997. Titanic: Safety, Speed and Sacrifice. Transportation Trails.
Bird, F. Jr. 1969. Practical Loss Control Leadership. Loganville, GA: International Loss Control Institute.
Douglas, M. & Wildavsky, A. 1983. Risk and Culture. University of California Press.
Filley, D. 1999. Risk Homeostasis and the Futility of Protecting People from Themselves. Independence Issue Paper No. 1-99.
Flin, R. 1996. Sitting in the Hot Seat. Chichester: John Wiley & Sons.
Heinrich, H.W. 1931/1959. Industrial Accident Prevention: A Scientific Approach. New York: McGraw-Hill.
Helmreich, R.L. & Merritt, A.C. 1998. Culture at Work in Aviation and Medicine: National, Organizational, and Professional Influences. Aldershot, Hampshire: Ashgate.
Hill, S.G.; Byers, J.C.; Rothblum, A. & Booth, R.L. 1994. Gathering and recording human-related data in marine and other accident investigations. In Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting.
Kristiansen, S. & Eide, S. 1994. Erfaringstilbakeføring: Dybdeanalyse av ulykker [Experience feedback: In-depth analysis of accidents]. Draft report, MARINTEK A/S, Institute for Marine Projects, NTH.

Kuntz, T. (ed.) 1998. The Titanic Disaster Hearings: The Official Transcripts of the 1912 Senate Investigation. New York: Pocket Books.
LaPorte, T.R. & Consolini, P.M. 1991. Working in Practice but Not in Theory: Theoretical Challenges of "High Reliability Organizations". Journal of Public Administration Research and Theory, 1, No. 1.
Luxhøy, J.T. & Coit, D.W. 2006. Modelling Low Probability / High Consequence Events: An Aviation Safety Model. IEEE 1-4244-0008-2/06.
Patè-Cornell, M.E. 1993. Learning from the Piper Alpha Accident: A Postmortem Analysis of Technical and Organizational Factors. Risk Analysis, Vol. 13, No. 2.
Perrow, C. 1984. Normal Accidents: Living with High-Risk Technologies. Basic Books.
Perrow, C. 1994a. The Limits of Safety: The Enhancement of a Theory of Accidents. Journal of Contingencies and Crisis Management, Vol. 2, No. 4.
Perrow, C. 1994b. Accidents in High-Risk Systems. Technology Studies, 1/1.
Rasmussen, J. 1997. Risk Management in a Dynamic Society: A Modelling Problem. Safety Science, Vol. 27, No. 2/3.
Reason, J. 1990. Human Error. Cambridge University Press.
Reason, J. 1997. Managing the Risks of Organizational Accidents. Ashgate Publishing Limited.
Report on the Loss of the Titanic. 1998. The Official Government Enquiry, 2nd edition. Great Britain: Alan Sutton Publishing.
Roberts, K.H. 1990. Some Characteristics of One Type of High Reliability Organization. Organization Science, 1, No. 2.
Sagan, S.D. 1993. The Limits of Safety: Organizations, Accidents and Nuclear Weapons. Princeton, NJ: Princeton University Press.
Sanquist, T.; Lee, J.D.; McCallum, M.C. & Rothblum, A.M. 1996. Evaluating Shipboard Automation: Application to Mariner Training, Certification and Equipment Design. Paper for the National Transportation Safety Board Forum on Integrated Bridge Systems, May 6-7.
Savage, C.M. & Appleton, D. 1988. CIM and Fifth Generation Management. In Fifth Generation Management and Fifth Generation Technology. SME Blue Book Series. Dearborn, Michigan: Society of Manufacturing Engineers.
Schaub, H. 1997. Decision making in complex situations: Cognitive and motivational limitations. In R. Flin, E. Salas, M. Strub & L. Martin (eds.), Decision Making Under Stress. Ashgate Publishing Limited: 291-300.
Senge, P.M. 1990. Den femte disiplin: Kunsten å skape den lærende organisasjon [The Fifth Discipline: The Art of Creating the Learning Organization]. Oslo: Hjemmets Bokforlag.
Status 1994. What Antilocks Can Do; What They Cannot Do. Status, January. Insurance Institute for Highway Safety, Arlington, VA.
Turner, B.A. & Pidgeon, N.F. 1997. Man-Made Disasters, 2nd edition. London: Butterworth-Heinemann.
Vaughan, D. 1996. The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA. Chicago: University of Chicago Press.
Weick, K.E. 1987. Organizational Culture as a Source of High Reliability. California Management Review, Winter.
Wilde, G.J.S. 1982. The theory of risk homeostasis: Implications for safety and health. Risk Analysis, 2.
