Discussion Paper

A Theory for Optimal Organization
Prof. Dr. Markus Schwaninger

No 38 – November 2000

Abstract
Even though the question as to whether optimal organizational structures exist has been asked repeatedly, organization theory has not been able to answer it in general form. Over time, several useful heuristic principles to direct the structuring of organizations have emerged. Moreover, powerful models, inspired by general systems theory and cybernetics, have been elaborated, which establish the preconditions for organizational viability. Optimization theory has provided solutions to specific organizational problems, most of them of a heuristic nature. In this paper, a new approach is suggested for assessing, and ultimately also for pursuing, the optimality of organizational designs. It is proposed that the optimal fractal dimensionality of organizations is equivalent to that of living organisms. This hypothesis is submitted to a first test and corroborated by it. Thereby, a new path to a theory-based design of optimal organizations is opened up, the implications of which for the methodology of organization design may be substantial.

Contents
1. The question
2. Biology and organization theory
3. Optimization for organizations
4. Complexity and the multidimensional structure
5. The answer: Optimal dimensionality
6. Test of hypothesis
6.1. Setup
6.2. Structural architecture of Team Syntegrity
6.3. Calculus of the dimensionality of the Team Syntegrity architecture
7. Synopsis and Outlook

1. The question
The search for optimal structures has been a pervasive issue throughout the history of organization theory. Over time, important insights into fundamental concepts of organizing have emerged, e.g.:

- hierarchy as a strategy for coping with complexity (cf. Simon 1962);
- differentiation and integration as complementary fundamentals for effective organizing (cf. Lawrence/Lorsch 1967);
- autonomy and recursive structuring as architectural principles of viability (cf. Beer 1979, 1981, 1985).

These concepts represent invariant principles of organizational structuring. Hierarchy, differentiation and integration appear in most pertinent theories in one form or another. Autonomy has also become an important subject in the context of discussions about decentralization, democratization and empowerment. Recursion, i.e., the iterative application of the same principles or structures to different organizational levels for an organization to be viable, was introduced by Beer in the early seventies and has only recently aroused wider interest.

The question “Are there optimal structures for organizations?” has recurrently been posed in this general form. The answers have varied, but to date the tenor has in effect been: “No, there is no generally optimal structure. Probably there is an optimal structure for each organization. But we are still looking for a solid theory to ascertain it.”

2. Biology and organization theory
A look beyond the boundaries of the social sciences opens up a new perspective on this discussion. Biology has evolved new insights into the optimal structures of living organisms. It may be assumed that organization theorists can learn from them. Systems theorists and cyberneticians have traditionally leveraged knowledge originating from biology for the domain of social systems (e.g., Miller, Pask and Beer). Also, several of the founding fathers of systems theory were trained biologists: The Society for General Systems Research was founded in 1954 by Ludwig von Bertalanffy, Ralph Gerard (both biologists), Anatol Rapoport (a mathematician who had applied mathematics to biology) and Kenneth Boulding (an economist). Rashevsky and Rapoport spearheaded the application of mathematical modeling to both biological and social relationships.

Finally, General Systems Theory has also identified structural similarities which extend beyond mere comparisons of the social and biological domains. These also include analogous situations, e.g. in chemistry and physics, where such analogies have also been formalized mathematically, in the sense that practically the same mathematical model may be employed to express theories widely disparate in content (cf. Rapoport 1986, 1960).

Many of the loans from biology taken by social scientists have been at the level of analogy, stimulating thought but not leading all the way to rigorously formulated theory. There have been exceptions, at least two of them notable:

1.) James Grier Miller’s Living Systems Theory (LST): Miller identified 20 subsystems which make up a living system, and which are invariant across a wide spectrum of organized wholes, from the living cell to a society (Miller 1978). LST originated in the biological studies of Miller, who by training is a medical doctor.

2.) Stafford Beer’s Viable System Model (VSM): Beer discerned a set of control functions, and their interrelationships, which are the necessary and sufficient preconditions for the viability of any human or social system. This model is based on an isomorphic mapping originating from studies of the structure of the human central nervous system. A crucial feature of the VSM is that it introduces the notion of recursiveness, in that the viability of social systems is grounded in the recursive existence and functioning of the system of (self-)control it specifies (Beer 1981).

Both of these models are outstanding in their originality. They have also triggered numerous studies and applications in the realms of organization, society, and engineering. They both address the preconditions for viability, but not the issue of the optimality of structures.

3. Optimization for organizations
In the past, the question of organizational optimality has been considered discussible only in very specific and therefore limited contexts. The better-known examples are linked with the optimization of organizational processes, e.g.:

1.) Routing problems: A classical one is the “travelling salesman” problem, which poses the question of an optimal passage for an agent (e.g., a truck) which has to cover the routes between a number of destinations (e.g., cities) at minimal cost. More complex cases are resource allocation problems, which involve the routing of multiple resources (e.g., vehicles) to provide services for a large number of clients distributed over a territory (e.g., handicapped persons in a city, to be transported to centres of care within certain temporal bounds). Solutions to this kind of problem are mainly obtained by heuristic methods, in particular local search and heuristic meta-strategies (genetic algorithms, ant colonies, tabu search, etc.); a minimal illustrative sketch of such a heuristic is given below, after footnote 1. The exact methods available for the same purpose are of the mathematical programming type, specifically mixed-integer programming and branch-and-cut.

2.) Control problems: These problems typically require optimal control gains for parameters which lead to the stabilization of dynamical, feedback-driven systems (e.g., in a production system, optimal rates of adjustment of labor, material or financial input to optimize cycle time, quality or productivity). Solutions to these kinds of problem rely on systems of differential equations or dynamic programming in continuous time, or on methods of “fuzzy control”. They are efficient, even though they can only produce near-optimum solutions. Applications of this type have resulted in substantial improvements as far as the economic use of scarce resources is concerned.

The types of problems that call for an optimization of enduring structures are different. They are less prominent, owing to their shorter tradition.

1.) Single-objective optimization: Under this title the classical types of optimization can be subsumed. A typical case is the question of the optimal span of control in a hierarchy of agents with largely uniform tasks (e.g., the optimal number of salespersons in service centres in a market). The applied method is that of static optimization [1], either with one or several levels (in the latter case, hierarchical optimization). The optimization can calibrate one or several parameters optimally (e.g., number and size of the service centres).

2.) Multi-objective optimization: Large and complex logistic systems may call for “multiple-issue” solutions which allow for an overall optimum, taking into account different objectives at the same time (e.g., a postal system where cost, time and ecological performance may be the pre-eminent criteria, and where a structure of distribution centres with several levels may be considered). Solutions for this type of problem involve multi-objective, as well as multi-parameter and multi-level, optimization in a combined mode.

[1] The method of dynamic optimization is one where an algorithm helps to minimize the cost (e.g., of an inventory) or maximize some kind of performance (e.g., cycle time, profit or revenue) along a chain of activities.
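As an aside that is not part of the paper, the heuristic approach named above for routing problems can be illustrated with a minimal sketch in Python: a nearest-neighbour construction followed by a simple 2-opt local search for a small travelling-salesman instance. The city coordinates are invented for illustration only.

# Minimal illustration of heuristic routing (not from the paper):
# nearest-neighbour tour construction plus 2-opt local search.
import math
from itertools import combinations

cities = [(0, 0), (2, 6), (5, 2), (6, 6), (8, 3), (1, 4), (7, 0)]  # invented

def dist(a, b):
    return math.dist(cities[a], cities[b])

def tour_length(tour):
    # Treat the tour as a closed cycle.
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def nearest_neighbour(start=0):
    # Greedy construction: always visit the closest unvisited city.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour):
    # Local search: reverse a segment whenever this shortens the cycle.
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(tour)), 2):
            if j - i < 2:
                continue
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(candidate) < tour_length(tour):
                tour, improved = candidate, True
    return tour

tour = two_opt(nearest_neighbour())
print(tour, round(tour_length(tour), 2))

Such a heuristic delivers a good, but not provably optimal, tour; exact mixed-integer or branch-and-cut methods, as mentioned in the text, would be required for a guaranteed optimum.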

The superiority of such solutions can be enormous, in comparison to the status quo of an operation, and also in relation to the pragmatic solutions taken into account. For example, a project for the design of the new postal distribution system was undertaken by the Swiss Postal Corporation (POST) in collaboration with the Institute of Operations Research (IfU) at the University of St. Gallen. Technically, the problem was to ascertain an optimal structure for a distribution system with three levels, by solving a complex mixed-integer optimization problem (cf. Klose 2000). The leverage achieved by the pertinent study amounted to an efficiency gain of roughly 13%. On the basis of conservative assumptions, the payback of this project can be estimated at about 18 days.

It must be noted that only a subset of the optimization techniques can provide “optimal solutions” in the strict sense of the word. Most of the more complex solutions mentioned are numerical and only approximate optimality. In other words, it is known that these solutions are close, often very close, to the theoretical optimum, although that theoretical optimum remains unknown. The exact boundary between the cases with “optimum” solutions and those with “close to optimum” solutions is given by the theoretical limit of feasible computation, which establishes the physical limits of analysis. Essentially, the use of heuristic methods is motivated by the question as to whether the decision problems which are part of these optimization problems are “NP-complete” [2]. This means, to put it in a nutshell, that an exact solution of a given optimization problem is possible if one can construct a polynomial algorithm which can solve all the decision problems of the complexity class in focus (cf. Garey/Johnson 1979, Wegener 1993).

[2] NP stands for nondeterministic polynomial time.

4. Complexity and the multidimensional structure
In principle, optimization can be applied to multi-dimensional problems, as has been said: multi-objective optimization is a case in point. When we talk about “multidimensional organization”, we mean something different: we indicate the simultaneous structuring of an organization according to different structuring criteria. Organization theory has focussed on ideal-types of structures:

• one-dimensional organizations: structured according to the criterion of function (e.g., R+D, production, marketing, etc.) or object (products, markets, geographically defined units, projects) - the typical hierarchical “line organization”;
• two-dimensional organizations: structured according to two of these criteria at the same time (also called “matrix organizations”);
• multi-dimensional organizations: structured according to three or more of these criteria at the same time (also called “tensor organizations”).

There has also been a lot of discussion about intermediate structural arrangements, which establish certain dimensions in a weakened form. For example, the euphoria with which the matrix organization was advocated in the seventies (e.g. Davis/Lawrence 1977) was dampened by the subsequent dysfunctionalities. People operating in the matrix mode suffered from lack of clarity and from conflict situations. The essential conclusion drawn was that even in organizations with two or more dimensions, there had to be clarity a) about who was an individual’s boss, authorised to admit, lead, sanction and possibly dismiss him or her, and b) about the order of priorities and the delimitation of competencies between the different dimensions. This new consensus about multidimensional structural arrangements has in fact led the way up to the present moment.

The environments within which organizations function have become increasingly complex. Organizations have made efforts to enhance their repertories of behavior, therewith striving to conform to Ashby’s Law of Requisite Variety [3] (Ashby 1964). Three developments are prominent:

1.) The traditionally one-dimensional structures have increasingly been replaced by higher-dimensional organizational arrangements: Hierarchy [4], where the location of control is always at the one and only apex, has given way to heterarchical [5] structures. Heterarchy denotes an organizational architecture in which different instances of the organization hold authority concerning different matters. For instance, unit A of a firm may hold the overall responsibility for a certain product group and for a specific region, while unit B is responsible for a different product group, etc. (cf. Schwaninger 1996, 2000a).

2.) Centralized intelligence and control have given way to distributed intelligence and control. “Distributed intelligence and control” is an embodiment of the organizational principle of recursiveness as set up by the theory of the Viable System Model (see above): The higher functions of the organizational meta-system (i.e., overall operative, strategic and normative management) are necessary prerequisites of viability at any organizational level.

3.) Static structures have lost their predominance, being increasingly complemented by dynamic, changeable, fluid arrangements. The basic structure continues to rest on a relatively durable principle, e.g. an architecture of a firm made up of business units (even though the number and activities of these units may change) or the structure of an alliance (even if its members may vary over time). But that (relatively) constant component is more and more united with a variable component. The pertinent keyword is “virtual organization”. The attribute “virtual” derives from the Latin ‘virtus’ - virtue. In this case, the virtue resides in the potential of realizing an almost infinite number of states by combining a finite quantity of resources in a manner which is customized to the situation confronted. As situations shift over time, the organizational arrangements also have to be modified or reconfigured. Project organization, which is by definition targeted at mastering situations with a high degree of non-routine (innovative) tasks and a limited time span, is therefore the prototype of the organizational form to deal with this kind of challenge. Therefore it is not surprising that the concept of “virtual organization” stems from the world of project organization. To the best of my knowledge, it was first used at DEC (Digital Equipment), where operating effectively in heterogeneous information technology landscapes was achieved via flexible formations of task forces (cf. Savage 1990). This mode of organizing has been adopted by many private and public firms, not only those of the industrial type, but also and increasingly by complex service organizations. It has reshaped the landscape of production, marketing and R&D. An example is the “virtual factories”, cooperations of partner firms which combine their forces to meet the demands of the market in flexible and innovative ways (cf. Schuh/Millarg/Göransson 1998). Recently, the logic of virtual organizations has also been conceptually reconciled with the principles of structuring for viability (Schwaninger 1994, Schwaninger/Friedli 2001).

To sum up, multidimensional structure is a powerful answer to the challenges of complexity faced by modern organizations. From an abstract point of view, virtuality is only one additional dimension, a mode of structure varying over time. Even though multidimensionality has contributed substantially to absorbing complexity and therewith to organizations’ abiding by Ashby’s Law, the question of an optimal degree of dimensionality remains open. In fact, it has barely been addressed.

[3] “Variety” is a technical term for “complexity”, which denotes the number of potential states or modes of behavior of a system. Ashby’s Law - “Only variety can destroy ... variety” - implies that a system can only be viable if its variety matches its environment’s variety.
[4] Hierarchy, from the Greek ‘hieros’ - holy, and ‘archein’ - to rule.
[5] Heterarchy, from the Greek ‘heteros’ - different, and ‘archein’ - to rule.

5. The answer: Optimal dimensionality
If we ask “What dimensionality is optimal?”, organization theory itself cannot provide us with satisfactory answers. Again, we have to take recourse to the natural sciences, and once more biology appears to be a reliable source of knowledge. Recent biological studies teach us that living organisms (plants, animals, humans) are structured in a fractal mode: their metabolism, breathing, blood circulation and other vital functions are optimized by these fractal structures. The following is mainly based on Sernetz (2000).

In fact, this “organizational strategy” of living organisms entails their functional superiority to man-made bio-reactors. These require continuous stirring to bring about the turbulence of the liquid necessary for a higher efficiency of the chemical reaction, which can only be achieved with a high energy input. The organism of an animal or human produces catalytic processes similar to those of a bio-reactor. However, there is no need for stirring; all the pump function of the heart induces is a laminar blood flow, enabled by a much lower energy level. Even so, the mixing of the liquid phase (blood) and the stationary phase (tissue) is achieved in an optimal way, with a result that a continuously stirred tank reactor (CSTR) could only mimic at enormously high cost. This stupendous superiority of the natural “reactor” is largely due to the fractal structure of the living organism.

A fractal structure is one where the principle of organization is applied in an iterative mode. Consequently the parts show the same structure as the whole; the organization is self-similar. We have met this structural principle earlier in this paper, under the keyword “recursiveness”. Fractal structure is only a special case of recursive structure. However, the term “fractal” adverts to a specific notion: It derives from the Latin participle “fractus” - broken, which in this case denotes that the dimensionality of the object under study is usually to be expressed by a broken (non-integer) number, not by 1, 2 or 3. A dimensionality of one would be applicable to a line; two corresponds to an area and three to a sphere. Self-similar objects cannot be described adequately by means of the classical measures - length, surface and volume. For example, the curve of Koch’s snowflake, a classical fractal, bounds a finite area but has an infinite length. Also, the mathematical idealization of a tree of blood vessels with infinitely thin branches has a volume of zero, although it reaches every single point inside the organ it feeds (Sernetz 2000).

The dimension of a fractal can be ascertained by determining its conventional measures by means of increasingly fine yardsticks, and by ascertaining how much the measures grow as a function of this refinement. Real biological objects can be measured in this way, whereby their fractal dimension can be determined. Biometric studies have ascertained that the metabolic activity of living organisms follows a power law, expressed by the following formula, which has been derived from empirical evidence (after Sernetz/Justen/Jestczemski 1996, Jestczemski/Bolterauer/Sernetz 1997, Sernetz 1993):

M = L^D, where 2,2 < D < 2,3.     (1)

M is the metabolic activity, measured as the organism’s throughput in Joule per second, and L is the length of the organism. D is the fractal dimensionality of the organism. On the basis of measurements of multiple species, Sernetz and his team at the University of Giessen ascertained an allometric exponent of b = 0,74, measuring the progression of throughput in Joule per second as a function of body volume in litres. Expressed in terms of length instead of volume, the applicable exponent is D = 3b = 2,22, which denotes the growth of the metabolic rate as a function of the length of an organism. On the assumption that the extant living organisms are optimally structured, Sernetz concludes: An optimally built organism must be 2,22-dimensional (Sernetz 2000). In other words, according to this theory, for optimally built organisms the power law must be:

M = L^{2,22}.     (2)
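For clarity, the step from the volume-based exponent b to the length-based exponent D can be restated in one line; this is merely an unpacking of the relation D = 3b used in the text:

M ∝ V^b  and  V ∝ L^3   ⇒   M ∝ (L^3)^b = L^{3b} = L^{3·0,74} = L^{2,22},   hence D = 3b = 2,22.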

Recall that Beer, with his Viable System Model, discovered and formulated the principle of recursive structure as the key to effective coping with complexity for organizations, and that this model is based on insights into the working of living organisms. It must be emphasized that the VSM has shown extraordinary power as a diagnostic model, and that to date it has not been refuted. If we continue taking the functioning of living organisms as a role model for social organizations, we may be on the threshold of a further development of organizational theory. We know that social organizations, in terms of communication and information, show properties which are in a certain sense identical to those of living organisms. Even though there are fundamental differences - social organizations are constituted by autonomous agents with their own goals, preferences and values (Ackoff/Gharajedaghi 1984) - they resemble living organisms to a high degree. If we take the internal transfer of knowledge as a case in point, we can suppose that this is, in principle, the most important mechanism of adaptation. This can at least be derived from studies made during the last decade (cf. Henschel 2001 and the literature quoted therein).

Insofar as social organisms obey the same laws of functioning as living organisms, the empirical law quoted above can be used as a benchmark for the optimal design of organizations. If the structural laws governing the behavior of social organizations and living organisms are the same, and there is a large body of evidence indicating that they are, then we can make use of a powerful isomorphy (i.e., structural invariance): Similarly to an optimally built organism, an optimally conceived organization shows a dimensionality of about 2,2 to 2,3. This law should hold independent of size, sector or activity, or other situational factors (context). Small exceptions are conceivable: for example, in the case of an enduringly constant environment, a lower dimensionality would be thinkable as optimal, but probably only in economic terms. Note, however, that optimization in one dimension only is in principle problematic in complex systems: Whenever one maximizes one variable in a complex system, the likelihood increases that bottlenecks will spread and that consequently unstable or chaotic behavior will ensue (cf. Adam 1985).

6. Test of hypothesis
The hypothesis to be tested is: An optimally conceived organization shows a dimensionality of about 2,2 to 2,3.

There are several ways of testing this hypothesis. The usual one would be to proceed with a comparison, scrutinizing a number of organizations in similar situations, measuring a) their performance and b) their organizational dimensionality, and subsequently examining whether the structure-performance link implied by this hypothesis is corroborated by the data. A second approach would be to test whether a structural arrangement already proven or at least considered optimal is in accordance with the hypothesis. Even though the first test would be the stronger option, initially the second one is to be carried out, as it involves a lower cost.

6.1. Setup
The setup will provide for examining a theory for an optimal structuring of interactions and communications in large groups - the Team Syntegrity model (TSM). This is a structural framework to foster cohesion and synergy in larger groups of individuals, or to encourage the transformation of mere aggregates of individuals with similar interests into organizations with their own identities. Invented by Stafford Beer (1994), TSM is a progressive design for democratic management in the sense of the heterarchical-participative type of organization (cf. Schwaninger 1996). The TSM is a holographic model for organizing processes of communication in a non-hierarchical fashion that can be shown to be mathematically optimal for the (self-)management of social systems. Based on the structure of polyhedra, it is especially suitable for realizing team-oriented structures, and for supporting processes of planning, knowledge generation and innovation in turbulent environments. In the following, I shall illustrate the architecture of the model by using the geometry of an icosahedron, which is one of the structures commonly used to organize Syntegration events - in this case with 30 participants.

The formation of networks by persons who are connected by mutual interests is a manifestation of the information/knowledge society and a structural answer to challenges of our time. An infoset is a set of individuals who share a common concern, who are in possession of pertinent information or knowledge connected with the issues of concern, and who have the will (and most likely also the enthusiasm) to tackle these. The Team Syntegrity model supplies the structural framework for the synergetic interaction of an infoset, which is intended to lead to an integration of multiple topics and perspectives towards a shared body of knowledge. The term Syntegrity results from a combination of synergy and tensile integrity. We speak of synergy when the interaction or co-operation of two or more agents produces a combined effect greater than the sum of their individual efforts. Tensile integrity is the structural strength provided by tension, as opposed to compression (Fuller/Applewhite 1982).

6.2. Structural architecture of Team Syntegrity
An infoset of 30 persons, for example, can organize itself according to the structure of an icosahedron, the most complex of the regular, convex polyhedra (Figure 1 [6]), whereas for smaller gatherings, structures based on other polyhedra are possible. Each member of the 30-member infoset is represented by one edge of the icosahedron. Each vertex stands for a team of five players (i.e., the five edges meeting at it) working on one topic; in an icosahedron there are 12 vertices, which would be marked by different colours in a Syntegration. Therefore, given the geometry, each participant as a player/actor is connected by his/her edge to two different teams. Ms. Red-Yellow, for instance, belongs to the teams (vertices) Red and Yellow. At the same time, she acts as a critic to two other teams (for example, Black and Silver, which are her immediate neighbours). This means that each team consists of 5 players and 5 critics. Altogether, the thirty agents perform a total of 120 roles (30 members, each with 2 roles as a player and 2 as a critic). In addition, there is the observer role, which will be explained in a moment.

This structure resolves the paradox of peripherality versus centrality of actors in an organization (as formalized by Bavelas 1952): While peripherality leads to communication pathologies, alienation and low morale, centrality is needed for effective action. However, as a group grows, centrality can only be “purchased” at the cost of increasing peripherality (Leonard 1994). Team Syntegrity enables an infoset to acquire “centrality” via a reverberative process (each team will meet more than once), although the peripherality of each one of its members equals zero, i.e., there is no peripherality at all.

[6] Kindly made available by TSI – Team Syntegrity Inc., Toronto, Canada.

Figure 1: Icosahedral structure of the Team Syntegrity model
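The counts cited above (12 teams, 30 members, 5 players per team, 120 player/critic roles) follow directly from the geometry of the icosahedron. As an illustration only, not part of the original paper, the following Python sketch reconstructs the icosahedron from its standard golden-ratio coordinates and verifies these figures:

# Illustrative sketch (not from the paper): rebuild the icosahedron underlying
# the Team Syntegrity architecture and check the counts cited in the text.
import math
from itertools import combinations

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

# Standard icosahedron vertices: cyclic permutations of (0, +/-1, +/-PHI).
vertices = []
for a in (-1.0, 1.0):
    for b in (-PHI, PHI):
        vertices += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]

pairs = list(combinations(range(len(vertices)), 2))
d = {(i, j): math.dist(vertices[i], vertices[j]) for i, j in pairs}

# Edges are the vertex pairs at the minimal mutual distance.
edge_len = min(d.values())
edges = [p for p in pairs if math.isclose(d[p], edge_len)]

degree = [0] * len(vertices)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1

print("teams (vertices):", len(vertices))   # 12
print("members (edges): ", len(edges))      # 30
print("players per team:", set(degree))     # {5}
# Each member (edge) is a player in its 2 teams and a critic in 2 further teams:
print("roles (30 x 4):  ", len(edges) * 4)  # 120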

In a nutshell, the structure of Team Syntegrity operates the following process: After the phases of initialization and joint design of the agenda around the common subject of interest, the 12 individual teams (consisting of 5 players and 5 critics each) explore their respective topics. Each team meets several (usually three) times and writes up a summary of its results to share with the whole infoset. The discussions are developed in a parallel mode, with two teams working at a time. This means that 20 of the 30 members of the infoset are involved in these discussions. The remaining 10 can attend any one of the meetings as observers, in order to complement the views derived from their activities as players and critics in their respective, individual sets of 4 teams. They may also use some of that time for lateral conversations with other “idle” members, or simply relax.

The fact that the same issue, with its different but interconnected aspects, is continually processed by the same set of people, who gather in alternating compositions (topic teams), implies strong reverberation and leads to a self-organizing process with high levels of knowledge integration. There is no centre required to integrate the individual efforts; integration just occurs of its own accord. It can be shown mathematically that this is a geometrically ergodic process, in which the eigenvalue [7] of the process converges to a minimum: Ninety percent of the information in the system will be shared after three iterations, and ninety-six percent after four iterations (Jalali 1994: 277).

[7] The formula to calculate the eigenvalue is: y = (1/√5)^n, with n denoting the number of iterations.
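The percentages quoted from Jalali (1994) can be reproduced by a quick calculation under one assumption that is not spelled out in the text: that the fraction of information not yet shared decays by a factor of 1/√5 per iteration (√5 being the second-largest eigenvalue of the icosahedral graph, normalized by its degree 5; cf. footnote 7). A small Python check of this assumption:

# Quick check under the assumption stated above (not from the paper):
# unshared information decays by a factor of 1/sqrt(5) per iteration.
import math

decay = 1 / math.sqrt(5)
for n in (3, 4):
    shared = 1 - decay ** n
    print(n, f"{shared:.1%}")   # 3 -> ~91.1%, 4 -> ~96.0%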

6.3. Calculus of the dimensionality of the Team Syntegrity architecture
In the following, a rough calculus of the dimensionality of an infoset as structured by the icosahedral architecture of Team Syntegrity ensues:

R_I = R_T + R_C ,     (1)

where R_I denotes the total set of actual relationships of the members of the infoset. Its components are R_T, the relationships at the team level, and R_C, the complementary relationships of the observers. R_T is the composite of the relationships within the twelve teams (t). n_i expresses the number of members of team i, the ideal number of members being five players (p) plus five critics (c) for all teams. The number of relationships between a pair of members is denoted by m:

R_T = \sum_{i=1}^{t} \frac{m \cdot n_i (n_i - 1)}{2} ,  where  n_i = p_i + c_i .     (2)

For this ideal case, with reciprocal relationships between each pair of members, i.e. m = 2, R_T amounts to:

R_T = 12 · 2 · 10 · 9 · 1/2 = 1'080 .

R_C is the composite of the relationships between the observers (b) and the members of the discussing teams they observe. The arrangement is that, while team discussions are going on, some of the observers relax [8], whereas others switch from team to team, visiting both sessions going on at the time. Switches can also be made between the iterations of the team discussions, i.e., an observer could distribute his activities between the three iterations: For example, in the three iterations of the parallel discussions of teams A and B, he or she could observe team A in the first iteration, relax in the second, and observe team B in the third iteration.

To take account of these aspects, some assumptions must be made explicit to arrive at a first, rough calculation. We establish a parameter f denoting the average fraction of the total number of observers b (ideally 10) who are actively observing teams during a given pair of sessions. Furthermore, we introduce a parameter s which expresses the average fraction of those active observers who switch between teams. Based on the many Syntegration events realized to date, including those accompanied by the author [9], the assumption of f = 1/2 and s = 1/3 appears to be realistic for a rough approximation of an idealized Syntegration [10]. Consequently, the following formula can be applied:

R_C = \sum_{t=1}^{12} b \cdot n \cdot f \cdot (1 + s) .     (3)

For this ideal type we get:

R_C = 12 · 10 · 10 · 1/2 · (1 + 1/3) = 800 .

Adding up R_T and R_C leads to:

R_I = 1'080 + 800 = 1'880 .

In other words, the set of relationships of an icosahedral infoset totals 1'880. At this point, the dimensionality of the structure (x) can be calculated as a function of R_I and the total number of the members of the infoset (N) [11]:

R_I = N^x .     (4)

To solve this equation, the following transformations are necessary:

\log R_I = x \log N ,  hence  x = \frac{\log R_I}{\log N} .     (5)

With a total set of relationships (R_I) of 1'880 and a total number of infoset members (N) of 30, the result is:

x = \frac{\log 1'880}{\log 30} = 2,21658 .

[8] Such relaxation is essential to keep the vigor, concentration and involvement of participants high.
[9] To date, approximately 140 Syntegrations have been realized, despite the relative recency of the model. The author has directed or co-directed several, among them the first worldwide electronic Syntegration (cf. Espejo/Schwaninger 1998), and accompanied many more, within the framework of a research association with Stafford Beer, the creator of the model, and TSI-Team Syntegrity Inc., Toronto, the organization which makes Team Syntegrity available to organizations.
[10] The assumptions made explicit here try to capture a structure which enables an “optimal” flow of information, taking into account the psycho-physically limited resilience of participants. Variations of the parameters f and s as a function of the situation at hand should also be considered (see below, under 6.5).
[11] Equation (4) is isomorphic with equation (1) in Section 5. Therefore, N is formally identical with L, and R_I with M, in the latter. In other words, an isomorphic correspondence between length and number of members of an infoset, as well as between metabolic rate and number of actual relationships between infoset members, is assumed.

In other words, the dimensionality of the icosahedral architecture of Team Syntegrity is 2,21658. In sum, the working hypothesis formulated above is strongly corroborated. The surprising fact is that this value of x is very close to the optimal dimensionality observed in biological organisms. It is actually even closer to 2,22 than originally expected (cf. the hypothesis above).
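As an illustration (not part of the original calculus), the computation of Section 6.3 can be reproduced in a few lines of Python, using the ideal-type values m = 2, n = 10, b = 10, f = 1/2 and s = 1/3 from the text:

# Python sketch reproducing the calculus of Section 6.3 with the ideal-type
# parameter values given in the text.
import math
from fractions import Fraction

teams = 12            # vertices of the icosahedron (topic teams)
N = 30                # edges of the icosahedron (infoset members)
n = 10                # members per team: 5 players + 5 critics
m = 2                 # reciprocal relationships per pair of members
b = 10                # observers available while two teams are in session
f = Fraction(1, 2)    # fraction of observers actively observing
s = Fraction(1, 3)    # fraction of active observers switching between teams

R_T = teams * m * n * (n - 1) // 2     # formula (2): 1'080
R_C = teams * b * n * f * (1 + s)      # formula (3): 800
R_I = R_T + R_C                        # formula (1): 1'880

x = math.log(float(R_I)) / math.log(N)  # formulas (4)/(5)
print(R_T, R_C, R_I)                    # 1080 800 1880
print(round(x, 5))                      # 2.21658
print(2.2 < x < 2.3)                    # True: within the hypothesized band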

6.4. The revised theorem
In the light of these results, the hypothesis formulated above can be slightly revised, in the sense of proposing the following Theorem for an Optimal Structure of Organizations:

An optimal organization structure shows a dimensionality of approximately 2,22.

6.5. Discussion
I am aware that this proposition is bold, because it has not been scrutinized in every possible way. On the other hand, the advantage of this formulation is that it conforms to Popper’s principle of falsifiability. In principle, this Theorem for an Optimal Structure of Organizations provides a powerful conceptual instrument to establish whether the dimensionality of a structure is too high or too low. The benefit lies in avoiding potentially huge costs and a host of dysfunctionalities, not only economic but also social and ecological ones.

However, the theorem also prompts questions. One major question that emerges is how general this theorem is. Does not contingency theory postulate that organizational structures are, and should be, a function of the contexts they face? According to contingency theory, placid environments require and induce less complex structures than turbulent ones (cf. Lawrence/Lorsch 1967, Thompson 1967). The answer is straightforward: The theorem proposed here defines optimality in terms of contexts similar to those faced by living biological organisms. These are always confronted with complex, turbulent environments, at least potentially. Also, in the social domain potential complexity and turbulence are ubiquitous. Team Syntegrity, the reference model used for the test above, is definitely a model to be recommended for dealing with complex issues, but it would not be advisable for the structuring of a mere routine task. In addition, coping with that kind of task would most probably not require an organization with a dimensionality of 2,22. However, routine tasks are usually part of more encompassing organizations, which in the end strive for viability and development (Schwaninger 2000b). As a whole, these organizations are in principle exposed to high complexity, at least potentially. Further research should explore the possible limits of this theorem.

Admittedly, a limitation of this paper is one of extension: not all the practical implications which are already discernible at this point can be treated in detail. For example, this first test has been confined to one organizational model, albeit under consideration of multiple modalities of its use. Other models for organizational structuring, which cannot be examined here, should be studied in the light of this Theorem for an Optimal Structure of Organizations. Also, the test applied here has essentially been realized in a deductive mode. In addition, empirical tests of the type mentioned at the beginning of Section 6 should be carried out in the future. Furthermore, variants of the assumptions underlying formula (3) of the calculus should be considered [12]; for example, possible trade-offs between the parameters f and s should be studied (a small numerical illustration follows footnote 12 below). Finally, a great deal could be gained by improving and fine-tuning organizational models and methodologies - Team Syntegrity being one of them - in the light of this theorem.

[12] Empirical studies will - ceteris paribus (all other factors being equal) - show different values for f and s, depending on the circumstances of the respective Syntegration event: For example, a Syntegration with obligatory participation and limited commitment of participants tends to exhibit lower values for f and s than the ones chosen in the calculus above. The opposite - higher values for f and s - will tend to be the case in a Syntegration of a group of people tackling a difficult issue to which all of them are highly committed.
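To illustrate the kind of parameter variation mentioned in footnote 12, the following sketch recomputes the dimensionality x for a few combinations of f and s. The alternative values of f and s are invented for illustration and are not taken from the paper:

# Illustrative sensitivity check (alternative f and s values are assumed, not
# from the paper): how the dimensionality x of the icosahedral infoset varies
# with the observer parameters f and s.
import math

def dimensionality(f, s, teams=12, N=30, n=10, m=2, b=10):
    R_T = teams * m * n * (n - 1) / 2      # formula (2)
    R_C = teams * b * n * f * (1 + s)      # formula (3)
    return math.log(R_T + R_C) / math.log(N)

for f, s in [(0.3, 0.2), (0.5, 1/3), (0.7, 0.5)]:
    print(f, round(s, 2), round(dimensionality(f, s), 3))
# Prints roughly 2.153, 2.217 and 2.281: lower observer engagement pulls x
# below the 2,22 benchmark, higher engagement pushes it above.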

7. Synopsis and Outlook
This paper has addressed the question of the optimality of organizational structures. First, the quest for optimality has been traced throughout the endeavours of management science and organization theory. It has been shown that the pertinent research has come up with methods to identify optimal, or mostly close-to-optimal, solutions for specific problems. Theory has also provided models which define necessary prerequisites for viability (in the case of Living Systems Theory) and even sufficient structural preconditions for viability (in the case of the Viable System Model). However, the established body of knowledge has not furnished a general theorem which would establish a norm for the dimensionality of optimally designed organizations.

The biological theory of fractal organismic structures has empirically ascertained the dimensionality of living organisms, which can be assumed to be optimal. Building on this new body of knowledge, a theorem for the design of optimal organizations has been proposed here. Also, a first test of that proposition has been undertaken. The results suggest that the theorem is surprisingly accurate.

In addition to further testing of the proposition, follow-up research should address several important questions, two of which shall be pointed out here. The first question is: What are operational measures of fractal dimensionalities, and how can they be achieved? The second question is: To what degree can the optimal dimensionality vary as a function of the properties of an organization, such as the cohesiveness or diversity of the goals, values and preferences of its members?

In sum, this Theorem for an Optimal Structure of Organizations is applicable to all kinds of social organisms, be they private firms, public organizations, social initiatives, etc. It opens up new prospects for a more rigorous assessment of models of structure proposed by theories of organization. But it also enables a better-founded evaluation of concrete structuring options, as well as a theory-based design and implementation of structural models in the practical world. In both contexts, this theorem offers a benchmark by means of which obsolete fads and fashions can be exposed, and dysfunctional propositions refuted. Finally, it must be emphasized that this theorem sheds new light on the design of the structures by which a society governs itself: The political system, the “state”, i.e., government, and the public sector in general can benefit from it.

Acknowledgements
The author extends his thanks to Mr. Heiko Eckert for making him aware of Sernetz’ work and to Mr. Peter Vrhovec for reading drafts of this paper as well as for his valuable and encouraging comments. He is also very grateful to Dr. Andreas Klose for reading parts of this paper and giving most helpful technical advice.

References
Ackoff, R.L./ Gharajedaghi, J. (1984). Mechanisms, Organisms and Human Systems, in: Strategic Management Journal, Vol. 5, pp. 289-300
Adam, A. (1985). Modellplatonismus und Computerorakel, in: Bühler, W., et al., eds., Die ganzheitlich-verstehende Betrachtung der sozialen Leistungsordnung, Festschrift Josef Kolbinger, Wien/New York: Springer
Ashby, W.R. (1964). An Introduction to Cybernetics, London: Methuen
Bavelas, A. (1952). Communication Patterns in Problem Groups, in: Cybernetics: Transactions of the Eighth Conference, New York: Josiah Macy Jr. Foundation
Beer, S. (1979). The Heart of Enterprise, Chichester etc.: Wiley
Beer, S. (1981). Brain of the Firm, 2nd ed., Chichester etc.: Wiley
Beer, S. (1985). Diagnosing the System for Organizations, Chichester etc.: Wiley
Beer, S. (1994). Beyond Dispute. The Invention of Team Syntegrity, Chichester etc.: Wiley
Davis, S.M./ Lawrence, P.R. (1977). Matrix, Reading, Mass. etc.: Addison-Wesley
Fuller, R.B./ Applewhite, E.J. (1982). Synergetics: Explorations in the Geometry of Thinking, New York/London: Macmillan/Collier
Garey, M.R./ Johnson, D.S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness, New York: Freeman
Henschel, A. (2001). Communities of Practice. Plattform für individuelles und kollektives Lernen sowie den Wissenstransfer, Doctoral Dissertation, Universität St. Gallen, forthcoming
Jalali, A. (1994). Reverberating Networks. Modelling Information Propagation in Syntegration by Spectral Analysis, in: Beer, S., Beyond Dispute. The Invention of Team Syntegrity, Chichester etc.: Wiley, pp. 263-281
Jestczemski, F./ Bolterauer, H./ Sernetz, M. (1997). Comparison of the Surface Dimension and the Mass Dimension of Blood Vessel Systems, in: Miyazima, S., ed., The Future of Fractals, Singapore etc.: World Scientific, pp. 11-20
Klose, A. (2000). Standortplanung in distributiven Systemen. Modelle, Methoden, Anwendungen, Habilitationsschrift, Universität St. Gallen, Switzerland
Lawrence, P.R./ Lorsch, J.W. (1967). Organization and Environment: Managing Differentiation and Integration, Boston: Harvard University Press
Leonard, A. (1994). Team Syntegrity: Planning for Action in the Next Century, in: Brady, B./ Peeno, L., eds., Proceedings, Conference of the International Society for the Systems Sciences, Louisville, Kentucky, pp. 1065-1072
Miller, J.G. (1978). Living Systems, Niwot, Colorado: University Press of Colorado (1995 reprint)
Rapoport, A. (1960). Fights, Games and Debates, Ann Arbor: University of Michigan Press (5th printing 1974)
Rapoport, A. (1986). General System Theory. Essential Concepts and Applications, Tunbridge Wells, Kent/Cambridge, Mass.: Abacus Press
Savage, Ch. (1990). Fifth Generation Management, Burlington, MA: Digital Press
Schuh, G./ Millarg, K./ Göransson, Å. (1998). Virtuelle Fabrik - Neue Marktchancen durch dynamische Netzwerke, München/Wien: Hanser
Schwaninger, M. (1994). Die intelligente Organisation als lebensfähige Heterarchie, Diskussionsbeiträge des Instituts für Betriebswirtschaft an der Hochschule St. Gallen, Nr. 14
Schwaninger, M. (1996). Structures for Intelligent Organizations, Discussion Papers, Institute of Management, University of St. Gallen, Switzerland, No. 20
Schwaninger, M. (2000a). Distributed Control in Social Systems, in: Parra-Luna, F., ed., The Performance of Social Systems, New York etc.: Kluwer Academic/Plenum Publishers, pp. 147-173
Schwaninger, M. (2000b). Managing Complexity - The Path Toward Intelligent Organizations, in: Systemic Practice and Action Research, Vol. 13, No. 2, pp. 207-241
Schwaninger, M./ Friedli, Th. (2001). Virtuelle Organisationen als Lebensfähige Systeme, in: Scholz, Ch., ed., Virtualisierung, Berlin: Duncker und Humblot, in press
Sernetz, M. (1994). Organisms as Open Systems, in: Nonnenmacher, T.F./ Losa, G.A./ Weibel, E.R., eds., Fractals in Biology and Medicine, Basel etc.: Birkhäuser, pp. 232-239
Sernetz, M. (2000). Die fraktale Geometrie des Lebendigen, in: Spektrum der Wissenschaft, July 2000, pp. 72-79
Sernetz, M./ Justen, M./ Jestczemski, F. (1996). Dispersive Fractal Characterization of Kidney Arteries by Three-Dimensional Mass-Radius-Analysis, in: Evertsz, C.J.G./ Peitgen, H.-O./ Voss, R.F., eds., Fractal Geometry and Analysis. The Mandelbrot Festschrift, Curaçao 1995, Singapore etc.: World Scientific, pp. 475-487
Simon, H.A. (1962). The Architecture of Complexity, in: Proceedings of the American Philosophical Society, Vol. 106, December 1962 (reprinted in: Simon, H.A., The Sciences of the Artificial, second edition, Cambridge, Mass./London: MIT Press, 1981, pp. 192-229)
Thompson, J.D. (1967). Organizations in Action, New York etc.: McGraw-Hill
Wegener, I. (1993). Theoretische Informatik. Eine algorithmenorientierte Einführung, Stuttgart: Teubner