Intransitive Noninterference in Nondeterministic Systems

Kai Engelhardt, Ron van der Meyden
CSE, UNSW, Sydney, NSW, Australia
{kaie,meyden}@cse.unsw.edu.au

Chenyi Zhang
ITEE, UQ, Brisbane, QLD, Australia
[email protected]

Abstract—This paper addresses the question of how TA-security, a semantics for intransitive information-flow policies in deterministic systems, can be generalized to nondeterministic systems. Various definitions are proposed, including definitions that state that the system enforces as much of the policy as possible in the context of attacks in which groups of agents collude by sharing information through channels that lie outside the system. Relationships between the various definitions proposed are characterized, and an unwinding-based proof technique is developed. Finally, it is shown that on a specific class of systems, access control systems with local nondeterminism, the strongest definition can be verified by checking a simple static property.

Index Terms—security; information-flow; noninterference; nondeterminism; access control

I. INTRODUCTION

Intransitive noninterference policies are formal information-flow security policies that characterize the causal structure of a system. The original theory of intransitive noninterference [11], [21] dealt with a deterministic systems model, and is based on an "intransitive purge" function, which generalizes the "purge" function used by Goguen and Meseguer [8] to give semantics to transitive policies. Recent work of van der Meyden [24] has shown that, even in the setting of deterministic systems, there are some flows of information that are not captured by the semantics for policies based on an intransitive purge function. This led to the proposal of an alternate semantics called TA-security, which has been shown to lead to a more satisfactory theory. Our main contribution in this paper is to develop generalizations of TA-security to a nondeterministic system model.

The question of how to generalize semantics for information-flow security from deterministic to nondeterministic systems is one that has generated a significant body of research. The bulk of this literature has concentrated on a very simple transitive policy, in which we have just two security domains, High and Low, with information permitted to flow from Low to High but not from High to Low. We begin by reviewing several issues from this literature: deducibility, inference from observations versus views, and persistence. In particular, we show how to give a very transparent characterization of some existing definitions using a notion of relative information that generalizes deducibility.

We then present a novel example highlighting that nondeterministic systems exhibit an issue that does not appear to have received much attention in the literature on

noninterference: the possibility that groups of domains may be able to collude and make inferences that one would expect to be prohibited. This holds even for a simple transitive policy (which says that no domain interferes with another). This then leads us to incorporate security against collusion attacks in our new definitions of security for richer, intransitive policies in nondeterministic systems. We consider two variants of such attacks, depending on whether the colluding agents share information during or after the run of the system. Our definitions can take as an additional parameter the property of persistence, which distinguishes between inference-based and causal definitions of security.

We show that all our definitions generalize existing definitions in a variety of concrete settings. In particular, for deterministic systems we generalize TA-security. For nondeterministic systems, we generalize variants of Correctability [13] in the case of the simple policy with just High and Low.

One of the main methodologies for verification that a system satisfies a noninterference-style security property is an inductive proof method based on the existence of an "unwinding relation", a binary relation on the states of the system [9]. We develop a variant of this technique that is sound for all our definitions. We also develop a new nondeterministic class of access control systems, where a static check suffices to verify security with respect to our definitions.

The structure of the paper is as follows. In Section II we define the formal systems model in which we work. Section III introduces information-flow policies, provides some background on prior art, and gives our new example showing that collusion attacks present an issue for noninterference policies. In Section IV we then define our generalizations of TA-security for nondeterministic systems. The following sections investigate the properties of these definitions.
Section V characterizes their content with respect to some well-studied situations. Section VI develops an unwinding proof method for these definitions, and Section VII concerns our generalization of static analysis based on access control. We discuss related work and conclude with a discussion of future directions in Section VIII.

II. SYSTEMS MODEL

Goguen and Meseguer [8] developed a theory of information-flow that has formed a starting point for later research. They introduced the following policy model.

Definition 1 A noninterference policy for a set U of security domains is a reflexive binary relation ↦ on U, representing the permitted interferences between domains.

Intuitively, u ↦ v represents that the policy permits information to flow from domain u to domain v. Another intuition is that actions of domain u are permitted to have causal effects on, or interfere with, domain v. Conversely, when u ↦̸ v, domain u is not permitted to interfere with v. Reflexivity is assumed since, in general, nothing can be done to prevent flows of information from a domain to itself, so it is not sensible for the policy to prohibit this.

A machine with domains U is a tuple M = (S, s0, A, −→, obs, dom) with
• a set S of states, including
• a designated initial state s0 ∈ S,
• a set A of actions,
• a (nondeterministic) transition relation −→ ⊆ S × A × S,
• an observation function obs : U → (S → O), for some set O, and
• a domain function dom : A → U.

We write obs_u for the function obs(u) : S → O, which represents the observation that domain u makes at each state of the machine. Following much of the security literature, we assume that the transition relation is input-enabled: for all states s and actions a, there exists a state t such that s --a--> t. This helps to prevent enabledness of actions from being a source of information. A machine is deterministic if for all states s, t, t′ and actions a, if s --a--> t and s --a--> t′ then t = t′.

a) Notational conventions: Sequences play an important role in this paper. Most of the time, we denote a sequence such as [a, b, c] by just abc; however, if the sequence elements have structure themselves, or if confusion is likely to arise, we separate sequence elements with a dot: a · b · c. A concise way to define A and dom is to list, for each u ∈ U, the set A_u = { a ∈ A | dom(a) = u } of actions of that domain. Given a domain u, we write ↦u for { v ∈ U | v ↦ u } and ↦̸u for U \ ↦u = { v ∈ U | v ↦̸ u }.
We also write u↦ for { v ∈ U | u ↦ v } and u↦̸ for U \ u↦.

b) Diagrammatic convention: We use the following convention in diagrams representing machines. States are represented by circles labelled internally by the state name. The initial state is always named s0. A transition s --a--> t is represented by an edge from s to t labelled by a. To remove clutter from diagrams, we elide edges corresponding to self-loops s --a--> s unless we wish to draw attention to them; thus, if there is no edge from a state labelled by an action a, then, using the input-enabled assumption, we infer that the missing edge is a self-loop. (Note that if there does exist an edge labelled a from state s, we do not make this inference.) States are labelled externally by observations of some of the domains; the details depend on the example and are given with each diagram.

A run is a sequence of the form s0 --a1--> s1 ... --an--> sn in which n ≥ 0, the si ∈ S are states (the first being the initial

state) and the a_i ∈ A are actions, such that (s_{i−1}, a_i, s_i) ∈ −→ for all i = 1 ... n. We write R(M) for the set of runs of machine M. The function last maps a nonempty sequence to its final element. A state is reachable if it is the final state of some run. The sequence of all actions in a run r is denoted Act(r). With M implicit, we also write R(α) for the set of r ∈ R(M) such that Act(r) = α. For a domain u, we write Act_u(r) for the subsequence of actions a in Act(r) with dom(a) = u.

The view of a run obtained by a domain, or group of domains, records all the actions and observations of the domain or group during the run, except that stuttering observations are collapsed to a single copy, to model that the agent/group operates asynchronously and so does not perceive the passing of time unless some event happens to it. Let X ⊆ U be a nonempty set of domains. We define the joint observation function of X as obs_X : S → O_X by obs_X(s)(u) = obs_u(s) for u ∈ X. That is, obs_X(s) is the tuple of observations made by the domains u ∈ X on state s. The view function view_X : R(M) → O_X (A (O_X)+)* is defined inductively by view_X(s0) = obs_X(s0) and

view_X(r --a--> q) =
  view_X(r) · a · obs_X(q)   if dom(a) ∈ X
  view_X(r) ∘̂ obs_X(q)       otherwise

where "∘̂" denotes absorptive concatenation, that is,

α ∘̂ a =
  α       if last(α) = a
  α · a   otherwise.

We write the special case where X = {u} is a singleton as view_u. We note that view_X(r) may contain information about the order of actions from domains in X that cannot be deduced from the collection of views (view_u(r))_{u∈X}. Intuitively, view_X(r) is the information that the group would obtain in r when the members of the group share their local information with the group at each step of the run, whereas the collection (view_u(r))_{u∈X} corresponds to the information that the group would have if the members shared their views only after r has completed.

III. BACKGROUND

In this section we review some of the literature on information-flow security and formulate versions of existing definitions within our system model. This review will motivate several of the ingredients we use in the new definitions we propose later. This section is largely a review of the existing literature; however, we do give new characterizations of some existing definitions, using a notion of relative information, that help to clarify the relationship to the new definitions we introduce later.

A. Noninterference, policy H ↦̸ L

Goguen and Meseguer's formal semantics for noninterference policies is restricted to transitive policies in deterministic machines. They define for each domain u ∈ U the purge

function purge_u : A* → A* that maps a sequence of actions to the subsequence of actions a with dom(a) ↦ u.

Definition 2 A machine M satisfies noninterference (NI) with respect to ↦ if for all domains u and all runs r, r′ of M with purge_u(Act(r)) = purge_u(Act(r′)), we have obs_u(last(r)) = obs_u(last(r′)).

It can be shown that, in deterministic machines, this is equivalent to the following: for all runs r, r′ of M with purge_L(Act(r)) = purge_L(Act(r′)), we have view_L(r) = view_L(r′). This presentation makes it clear that the definition says that the information in L's view in a run depends only on the L actions in the run, and is independent of the H actions.

One of the main questions of study since the seminal work of Goguen and Meseguer on deterministic systems is how their definitions should be generalized to nondeterministic systems. A great deal of the literature on this topic has been concerned with the simple two-domain policy (which we write as H ↦̸ L) given by the relation {(L, L), (H, H), (L, H)} on the set U = {L, H}, comprising the low-level (public) domain L and the high-level (classified) domain H. This is in part because even this simple policy presents many subtleties in the setting of nondeterministic systems, but also because there has been a view that it is possible to reduce arbitrary policies to this special case. Numerous definitions have been proposed that generalize NI for the policy H ↦̸ L to nondeterministic systems. We now discuss a few of these that highlight issues relevant to the novel definitions that we introduce later.

B. Nondeducibility and Relative Information

Sutherland [22] proposed to interpret the policy H ↦̸ L as stating, informally, that L cannot deduce information about H, and gave a general formal account of deducibility using a relation on functions whose domain is the state space of the system.
Suppose W is a set of 'worlds', and f and g are functions with domain W, representing possible views that agents may have of the world.

Definition 3 Say no information flows from f to g in system W, or g is nondeducible given f, if for all w, w′ ∈ W, there exists w″ with f(w″) = f(w) and g(w″) = g(w′).

A typical application of this notion is the following semantics for H ↦̸ L. A machine M satisfies nondeducibility on inputs [22] if, with W taken to be the set of runs of M, no information flows from Act_H to view_L. That is, L cannot deduce from its view view_L(r) of a run r what is the sequence of actions Act_H(r) that H has performed, in the sense that all sequences of H actions are consistent with every possible L view.

A generalization of nondeducibility will be useful for what follows, to make explicit a common logical structure that underlies the definitions we propose. Definitions similar to the following have been considered by Halpern and O'Neill [12] and More et al. [17].
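On finite examples, Definition 3 can be checked by brute force. The sketch below is our own illustrative encoding (function and variable names are not from the paper), with worlds as tuples and views as functions on them:

```python
from itertools import product

def nondeducible(worlds, f, g):
    # Definition 3: no information flows from f to g iff for all w, w',
    # some single world w'' combines f's value at w with g's value at w'.
    return all(
        any(f(w2) == f(w) and g(w2) == g(w1) for w2 in worlds)
        for w, w1 in product(worlds, repeat=2)
    )

# Two independent bits: observing one coordinate reveals nothing
# about the other, so nondeducibility holds.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Perfectly correlated bits: the first coordinate determines the second.
correlated = [(0, 0), (1, 1)]
```

On `independent`, neither coordinate is deducible from the other; on `correlated`, information flows between the coordinates.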

Definition 4 Let f, g and h be functions, each with domain the set W of 'worlds'. We say that in W, the function f contains no more information than g about h, if for all w, w′ ∈ W such that g(w) = g(w′) there exists w″ such that h(w″) = h(w′) and f(w″) = f(w).

As an application of Definition 4, note that nondeducibility on inputs states that every L view is consistent with every sequence of H actions, but in order to combine a particular L view and a particular sequence of H actions into a single run, it may be necessary to use a particular interleaving of the H and L actions. Thus, L may still have conditional information about H; e.g., L may know that if H did some particular sequence of actions, then it must have been interleaved with L's view in some particular way. The following stronger definition states that L lacks even this weak type of information about H activity.

Definition 5 A machine M for the policy H ↦̸ L satisfies correctability (COR) if for all runs r and sequences α ∈ A* with Act_L(r) = Act_L(α), there exists a run r′ with Act(r′) = α and view_L(r′) = view_L(r).

We call this notion correctability in view of its similarity to a notion of that name from [13]. (The main difference is in the underlying system model: correctability is defined on trace semantics, with an assumption that a trace extended by an input action is also a trace. We will not attempt here to make a formal connection to our system model.) It is easily seen that correctability may be given a clean characterization using the above notion of relative information as follows: M satisfies correctability iff, in R(M), the function view_L contains no more information about Act than purge_L ∘ Act.

C. Set-Based Dependencies

The main difference between deterministic and nondeterministic systems is that whereas, in a deterministic system, each sequence of actions is associated with a single run, in a nondeterministic system, it is associated with a set of runs.
This inhibits straightforward generalizations of noninterference based on the idea that the view of domain u should depend only on purge_u, since there may be many distinct views consistent with even a fixed sequence of actions. One possible response to this difficulty is to focus instead on dependencies at the level of sets of runs, rather than on individual runs. The notion of relative information of the previous section can be given a neat characterization in this way. Using a notation reminiscent of probability theory, for functions f and g with domain W, and a value v in the range of f, define poss(g | f = v) to be the set { g(w) | w ∈ W ∧ f(w) = v }. Then we have the following result.

Proposition 6¹ f contains no more information than g about h iff for all worlds w, if v_f = f(w) and v_g = g(w), then poss(h | g = v_g) ⊆ poss(h | f = v_f).

¹Proofs of all claims are given in the appendix.
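Definition 4 and the poss-based reformulation of Proposition 6 can both be checked by brute force on finite examples. The following sketch is our own illustrative encoding, not from the paper:

```python
def poss(worlds, g, f, v):
    # poss(g | f = v): the set of g-values over worlds where f equals v.
    return {g(w) for w in worlds if f(w) == v}

def no_more_info(worlds, f, g, h):
    # Definition 4: f contains no more information than g about h.
    return all(
        any(h(w2) == h(w1) and f(w2) == f(w) for w2 in worlds)
        for w in worlds for w1 in worlds if g(w) == g(w1)
    )

def no_more_info_via_poss(worlds, f, g, h):
    # Proposition 6: poss(h | g = g(w)) is a subset of poss(h | f = f(w)).
    return all(
        poss(worlds, h, g, g(w)) <= poss(worlds, h, f, f(w))
        for w in worlds
    )

worlds = [(0, 0), (0, 1), (1, 0), (1, 1)]
f = lambda w: w[0]   # a coarse view
g = lambda w: w      # full information
h = lambda w: w[1]   # the protected function
```

On this toy example the two formulations agree, as Proposition 6 asserts in general.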

The following provides an alternate, more symmetric formulation in a special case that applies to many of the definitions we consider.

Proposition 7 Let f be a function with domain W, let g : W → V be surjective, and let h be a function with domain V (so that h ∘ g also has domain W). Then in W, the function f contains no more information than h ∘ g about g iff for all v, v′ ∈ V with h(v) = h(v′) we have poss(f | g = v) = poss(f | g = v′).

Since we work with input-enabled machines, the function Act : R(M) → A* is surjective. As noted above, we may express that M satisfies COR as: in R(M), the function view_L contains no more information than purge_L ∘ Act about Act. By Proposition 7, an equivalent formulation is that for all α, α′ ∈ A* with purge_L(α) = purge_L(α′) we have poss(view_L | Act = α) = poss(view_L | Act = α′). This states COR in a form that clarifies its relationship to the view-based formulation of NI, by showing that COR is obtained by generalizing the single-valued view function of a deterministic machine used in NI to a set-valued view function in the nondeterministic setting.

D. Observation-based definitions are too weak in nondeterministic systems

For the policy H ↦̸ L, the definition of noninterference states that the observation of L in the final state of a run r should depend only on purge_L(Act(r)). As noted above, in deterministic systems, this is equivalent to the statement that the history view_L(r) of everything that is observable to L in the run should depend only on purge_L(Act(r)). However, as the following example shows, a similar equivalence between definitions stated in terms of observations in the final state of a run and definitions stated in terms of the view on the run does not hold in nondeterministic systems.

Example 8 Consider the security policy H ↦̸ L and the nondeterministic machine M depicted in Fig. 1, where states are labeled externally with L's observation, A = {ℓ, h}, dom(ℓ) = L, and dom(h) = H.
Suppose we define an observation-based version of correctability: M is obs-COR if, in R(M), the function obs_L ∘ last contains no more information than purge_L ∘ Act about Act. Equivalently, by Proposition 7, for all α, α′ ∈ A*, if purge_L(α) = purge_L(α′) then poss(obs_L ∘ last | Act = α) = poss(obs_L ∘ last | Act = α′). It can be seen that M satisfies obs-COR: the only H transition that can affect L observations is that from q00, but if this is added to or deleted from a run, there exists another run with the same subsequent sequence of actions ending in the same final observation for L. However, the possible views may differ, depending on whether h occurred: note that purge_L(hℓℓ) = ℓℓ = purge_L(ℓℓ), but poss(view_L | Act = ℓℓ) = {0ℓ1ℓ2, 0ℓ2ℓ1} whereas poss(view_L | Act = hℓℓ) = {0ℓ1ℓ1, 0ℓ2ℓ2}. Thus, this machine does not satisfy COR. Intuitively, it is insecure, since L can determine from its view whether the initial h action occurred.
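The views and purges used in such examples are easy to compute. The sketch below uses our own encoding (a run is a list alternating states and actions; observations and action domains are dicts; the toy machine data is hypothetical and is not the machine of Fig. 1):

```python
def absorb(seq, x):
    # Absorptive concatenation: drop x if it merely repeats the last element.
    return seq if seq and seq[-1] == x else seq + [x]

def view(run, X, obs, dom):
    # view_X of a run [s0, a1, s1, ..., an, sn]: X's actions interleaved with
    # joint observations, with stuttering observations collapsed.
    joint = lambda s: tuple(obs[u](s) for u in sorted(X))
    v = [joint(run[0])]
    for i in range(1, len(run), 2):
        a, s = run[i], run[i + 1]
        v = v + [a, joint(s)] if dom[a] in X else absorb(v, joint(s))
    return v

def purge(alpha, u, interferes, dom):
    # purge_u: keep only actions whose domain may interfere with u.
    return [a for a in alpha if (dom[a], u) in interferes]

# Toy data for the policy H |-/-> L.
dom = {'l': 'L', 'h': 'H'}
obs = {'L': lambda s: s, 'H': lambda s: 'bot'}
policy = {('L', 'L'), ('H', 'H'), ('L', 'H')}   # every pair except (H, L)
```

Note how an H transition that changes L's observation still shows up in L's view even though the h action itself is absorbed.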

[Fig. 1. Deductions ought to be based on views rather than observations. Diagram omitted: a machine over actions ℓ (domain L) and h (domain H), with states s0, s1, s01, s02, s11, s12 labeled by L's observations 0, 1, 2.]

This example points to the fact that in defining security in nondeterministic systems, we need to take the evidence from which an agent makes deductions to be its view, i.e., all that it could have observed up to the present moment, rather than just its current observation, as in the definition of noninterference and some of its later generalizations (e.g., the definition of intransitive noninterference in deterministic systems [11], [21]). This point has sometimes been missed in the literature on nondeterministic systems; see, e.g., the discussion of the work of von Oheimb [25] in Section VIII below.

E. Persistence

Two different intuitive interpretations of the notion of noninterference can be given: an epistemic interpretation, which says that L is not able to know or deduce anything about H activity, and a causal interpretation, which says that H actions may not have any causal effect on L observations. In deterministic systems, these interpretations may coincide, but this is no longer the case in nondeterministic systems, as the following example shows.

[Fig. 2. A machine that is COR but not P-COR. Diagram omitted: a machine over actions ℓ and h, with states s0–s7 and L observations 0 and 1 on the final states.]

Example 9 Consider the machine depicted in Fig. 2 for the policy H ↦̸ L. States are labeled externally with L's observation, except for states on which L observes ⊥, in which case we elide the observation. As in the previous example, A = {ℓ, h}, dom(ℓ) = L, and dom(h) = H. It can be seen that this machine satisfies the epistemic notion COR: domain L cannot make any deductions from its view about H actions, or about how these are interleaved with its own. (Recall that we elide self-loops.) However, it can reasonably be argued that this machine is not secure on a causal interpretation of security: note that by performing or not performing the action h from the state s1, domain H is able to influence whether L subsequently observes 0 or 1.

Examples such as this can be addressed using the notion of persistence, which has been factored into a number of definitions in the literature [5], [7], [18].

Definition 10 (Persistence) For a security definition X, we say that machine M = (S, s0, A, −→, obs, dom) persistently satisfies X (P-X) with respect to a policy ↦ if, for all reachable states s of M, the machine (S, s, A, −→, obs, dom) satisfies X with respect to ↦.

Note that the machine in Example 9 is not P-COR.²

F. Collusion

For policies that generalize from the two-domain setting of the policy H ↦̸ L in nondeterministic systems, the issue of collusion becomes a concern. As we illustrate in the present section, this is so even for transitive security policies. The simplest type of policy for which this point can be made is the separability policy [15], [20], which says that no domain may interfere with any other. That is, for a set of domains U, separability is the policy ∆U = { (u, u) | u ∈ U }. In the case that U consists of two domains A and B, this seems to say that A may not interfere with B, and vice versa, so one apparently reasonable interpretation of the policy is to apply a semantics for H ↦̸ L for all domains. This idea suggests the following definition, in which we apply the correctability semantics for H ↦̸ L to each domain, and use our relative information formulation.

Definition 11 A machine M for a set U of domains satisfies mutual correctability (MCOR) with respect to ↦ if for all domains u ∈ U, in R(M), view_u contains no more information than purge_u ∘ Act about Act.

In the case of the policy ↦ = ∆U, this says that no domain is able to deduce from its view anything about what actions other domains have performed, or how those actions were interleaved with its own. It turns out that, in some circumstances, this is an insufficient guarantee. Suppose that we have a system with three separated domains, i.e., U = {H, L1, L2}, and the policy is ∆U.
However, L1 and L2 are corrupt, and collude by communicating via a channel that lies outside the system. Under these circumstances, there is nothing that can be done in practice to prevent information flow between L1 and L2, even if the system is secure, so that the system itself is not the cause of the information flow. However, we expect that, since the system enforces the policy, H's information is still protected from leakage to L1 and L2. In fact, mutual correctability is too weak to provide such a guarantee.

Example 12 Consider the machine depicted in Fig. 3 under the policy ∆{H,L1,L2}. The domain H observes ⊥ at all states. The two low domains L1 and L2 have observations in the set {⊥, 0, 1}. These are indicated in Fig. 3 by labelling states to the right, above and below, by the observations made by L1 and L2, respectively. We omit the observation ⊥ to reduce clutter. It can be verified that machine M satisfies MCOR. However, H's information is not secure against collusion by the coalition L = {L1, L2}. We may represent the information held by the coalition L in a run r by view_L(r), using the set-based view definition introduced above. Similarly, we may generalize the purge function to the coalition by writing purge_L(α) for the subsequence of actions a in α ∈ A* with dom(a) ∈ L. Let α = ℓ1ℓ2 and β = hℓ1ℓ2. Note that purge_L(α) = α = purge_L(β). But, writing joint observations as pairs (obs_{L1}, obs_{L2}),

poss(view_L | Act = α) = { (⊥,⊥) · ℓ1 · (0,⊥) · ℓ2 · (0,0), (⊥,⊥) · ℓ1 · (1,⊥) · ℓ2 · (1,1) }
≠ { (⊥,⊥) · ℓ1 · (0,⊥) · ℓ2 · (0,1), (⊥,⊥) · ℓ1 · (1,⊥) · ℓ2 · (1,0) } = poss(view_L | Act = β).

Intuitively, although the policy suggests that the coalition should not be able to distinguish between the sequences α and β, in fact they can, using the parity of their final observations.
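The coalition purge used in this example is simply the subsequence of coalition actions; a sketch in our own illustrative encoding:

```python
def purge_coalition(alpha, X, dom):
    # purge_X: the subsequence of actions whose domain lies in coalition X.
    return [a for a in alpha if dom[a] in X]

dom = {'h': 'H', 'l1': 'L1', 'l2': 'L2'}
alpha = ['l1', 'l2']       # run without the H action
beta = ['h', 'l1', 'l2']   # run with the H action
```

Since the two purges coincide, the policy says the coalition's possible views should not distinguish α from β, which is exactly what the machine of Example 12 violates.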

[Fig. 3. An insecure system that satisfies MCOR. Diagram omitted: a machine over actions ℓ1, ℓ2 (domains L1, L2) and h (domain H), with states s0–s16 labeled by the observations of L1 and L2.]

²We remark that in the case of COR, the notion P-COR is in the spirit of the notion of 0-forward correctability of Johnson and Thayer [13], which says that an addition or deletion from a trace of an H action requires only changes to subsequent H observations to obtain another trace that looks the same to L. (We will not attempt here to make this correspondence precise.)

There has been some informal recognition in the literature of the relevance of coalitions when dealing with policies beyond H ↦̸ L, but there appear to have been very few formal studies of the issue. (We defer discussion of the related literature to Section VIII.) One of our contributions in this paper is to pursue such a formal study in the general setting of intransitive policies.

G. Intransitive Noninterference

The final background we require concerns semantics for intransitive policies; this issue has been studied primarily in the setting of deterministic systems. One of the main motivations for intransitive policies is to represent the role of trusted components within an architectural design of a system. A canonical example of this is a downgrader, a trusted component that manages declassification of high-level secrets to low-level domains. A policy for such a system is the policy HDL = {H, D, L}² \ {(H, L)} depicted in Figure 4 (Fig. 4, Policy HDL; diagram omitted). Here D represents a trusted downgrader component.

Even in deterministic systems, NI is not an adequate definition of security for such intransitive policies, since it implies that L can learn nothing about H activity (not even when the downgrader permits it). To give a more adequate semantics, Haigh and Young [11] generalized the definition of the purge function to intransitive policies; we follow the formulation of Rushby [21]. Intuitively, the intransitive purge of a sequence of actions with respect to a domain u is the largest subsequence of actions that could form part of a causal chain of effects (permitted by the policy) ending with an effect on domain u. More formally, the definition makes use of a function src : U × A* → P(U) defined inductively by src_u(ε) = {u} and

src_u(aα) = src_u(α) ∪ { dom(a) | ∃v ∈ src_u(α) (dom(a) ↦ v) }

for a ∈ A and α ∈ A*. Intuitively, src_u(α) is the set of domains v such that there exists a sequence of permitted interferences from v to u within α.
The intransitive purge function ip_u : A* → A* for each domain u ∈ U is then defined inductively by ip_u(ε) = ε and, for a ∈ A and α ∈ A*,

ip_u(aα) =
  a · ip_u(α)   if dom(a) ∈ src_u(aα)
  ip_u(α)       otherwise.

Haigh and Young's definition of security uses the intransitive purge function in place of the purge function in Goguen and Meseguer's definition. Using our relative information formulation, the following is equivalent:

Definition 13 A deterministic machine M is IP-secure with respect to a (possibly intransitive) policy ↦ if for all u ∈ U, in R(M), the function obs_u ∘ last contains no more information than ip_u ∘ Act about Act.
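The src and ip functions translate directly into code. The following recursive sketch is our own encoding, illustrated on the HDL policy; it is not from the paper:

```python
def src(u, alpha, dom, interferes):
    # src_u(alpha): domains with a permitted chain of interferences to u in alpha.
    if not alpha:
        return {u}
    a, rest = alpha[0], alpha[1:]
    s = src(u, rest, dom, interferes)
    return s | {dom[a]} if any((dom[a], v) in interferes for v in s) else s

def ip(u, alpha, dom, interferes):
    # ip_u(alpha): the intransitive purge of Haigh and Young / Rushby.
    if not alpha:
        return []
    a, rest = alpha[0], alpha[1:]
    keep = dom[a] in src(u, alpha, dom, interferes)
    return ([a] if keep else []) + ip(u, rest, dom, interferes)

# The HDL policy: every interference permitted except H |-> L.
HDL = {(u, v) for u in 'HDL' for v in 'HDL'} - {('H', 'L')}
dom = {'h': 'H', 'd': 'D', 'l': 'L'}
```

The second test below reflects the intent of the policy: an H action followed by a downgrader action may lawfully reach L, so it survives the intransitive purge.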

As with NI, an equivalent statement is that for all u ∈ U, in R(M), the function view_u contains no more information than ip_u ∘ Act about Act. It can be seen that ip_u = purge_u when ↦ is transitive, so IP-security is in fact a generalization of the definition of security for transitive policies.

It has been argued by van der Meyden [24] that IP-security misses some subtle flows of information relating to the ordering of events. In response, he proposes an alternate semantics based on a function ta_u for each domain u, intended to capture the maximal information about actions that domain u may have, according to the policy. This function is inductively defined by ta_u(ε) = ε and

ta_u(αa) =
  (ta_u(α), ta_{dom(a)}(α), a)   if dom(a) ↦ u
  ta_u(α)                        otherwise.

Intuitively, this definition resembles a full-information protocol, in which, when performing an action a after sequence α, domain dom(a) sends everything that it knows (as represented by ta_{dom(a)}(α)), as well as the fact that it has performed action a, to every domain u to which it is permitted to transmit information. The recipient domain u adds this new information to its existing information ta_u(α). Domains to which dom(a) is not permitted to send information learn nothing when a happens. This function forms the basis for a definition of security which may be formulated using relative information as follows:

Definition 14 A deterministic machine M is TA-secure with respect to a policy ↦ if, for all u ∈ U, obs_u ∘ last contains no more information than ta_u ∘ Act about Act.

Again, it proves to be equivalent to say that view_u contains no more information than ta_u ∘ Act about Act. This definition is equivalent to NI in the case of transitive policies.

IV. MAIN DEFINITIONS

We are now positioned to state our new definitions, which give meaning to intransitive noninterference policies in nondeterministic machines.
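Before doing so, we note that the ta function of Definition 14 can be computed for all domains in one left-to-right pass; a sketch in our own encoding, again on the HDL policy:

```python
def ta_all(alpha, domains, dom, interferes):
    # Compute ta_u(alpha) for every u, following ta_u(epsilon) = epsilon and
    # ta_u(alpha.a) = (ta_u(alpha), ta_{dom(a)}(alpha), a) when dom(a) |-> u.
    t = {u: () for u in domains}
    for a in alpha:
        prev = dict(t)  # the values of ta before a is appended
        for u in domains:
            if (dom[a], u) in interferes:
                t[u] = (prev[u], prev[dom[a]], a)
    return t

HDL = {(u, v) for u in 'HDL' for v in 'HDL'} - {('H', 'L')}
dom = {'h': 'H', 'd': 'D', 'l': 'L'}
```

Nested tuples play the role of the terms ta_u(α), so equality of maximal permitted information is just tuple equality.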
As noted above, our focus in this paper is on definitions that place constraints on the flow of information concerning the actions that have been performed. We furthermore focus on generalizing the notion of TA-security from van der Meyden [24]. We characterized TA-security in deterministic machines as stating that, for each domain u, the function obs_u ∘ last contains no more information than ta_u ∘ Act about Act, and noted that it is equivalent to take view_u in place of obs_u ∘ last in this definition. However, in nondeterministic systems, an observation-based definition of security may be too weak, as shown in Section III-D. This suggests the following as an appropriate generalization of the definition to nondeterministic systems.

Definition 15 A nondeterministic machine M is nTA-secure with respect to a policy ↦ if, for all u ∈ U, the function view_u contains no more information than ta_u ∘ Act about Act.

Intuitively, as in the deterministic case, this definition places, in each run r, an upper bound on the information about the action sequence Act(r) that is permitted to be contained in each possible view view_u(r): it may be no more than the information contained in ta_u(Act(r)). By Proposition 7, an equivalent statement is that for all α, β ∈ A*, if ta_u(α) = ta_u(β) then poss(view_u | Act = α) = poss(view_u | Act = β).

As we noted above in Section III-F, a nondeterministic system may be secure with respect to a definition of security that constrains flow of information to individual domains, while allowing flows of information to groups of domains. If the system needs to be protected against collusion, this means that we require a stronger definition that takes into account the deductive capability of groups. One way to approach such a definition is to focus on what the group would know if agents in the group were, after the completion of a run of the system, to share what they have observed. We call this a post-hoc coalition. The state of information of such a coalition X ⊆ U may be represented by the function ⟨view_u⟩_{u∈X}. We would like them to be able to deduce no more than they would be able to deduce if, after the run, they were to share their maximal permitted information, represented by the terms ta_u(Act(r)) for u in the group. This leads to the following definition.

Definition 16 A nondeterministic machine M is nTA-secure with respect to post-hoc coalitions (PCnTA) for a policy ↦ if, for all nonempty sets X ⊆ U, ⟨view_u⟩_{u∈X} contains no more information than ⟨ta_u⟩_{u∈X} ∘ Act about Act.

By Proposition 7, an equivalent statement is that for all X ⊆ U and α, β ∈ A*, if ta_u(α) = ta_u(β) for all u ∈ X, then

poss(⟨view_u⟩_{u∈X} | Act = α) = poss(⟨view_u⟩_{u∈X} | Act = β).

One situation in which this attack model is appropriate is a system in which multiple agents submit computations to a cloud server, and receive output only after the computation is complete.
In a more interactive setting, the colluding coalition X may be able to do more than share information at the end of the run: its members may be able to share information at each step of the run. We call this a runtime coalition. Since our systems model is asynchronous, agents in X are typically not able to deduce a linear ordering on the actions performed by members of X from the set of views view_u(r) for u ∈ X. However, if they are able to communicate each time one of them performs an action, then at the end of the run their information is the joint view view_X(r), which does contain the order of the actions in the domains in X. This means that reasoning based on view_X represents a stronger attack model. The following definition attempts to state that the system is resilient to this stronger type of attack. In order to formulate it, we need to characterize the maximal permitted information for a group X that colludes by sharing information at each step. In the previous definition, the group's maximal permitted information in a run r is represented by the collection of terms ta_u(Act(r)) for u ∈ X.

In general, this is too weak to allow deduction of the order of the actions performed in the domains in X, whereas, as we just noted, the group can deduce this order from its joint view view_X(r). This suggests that we need a stronger definition of the maximal permitted information under this attack model. We take the approach here that if the group cannot be prevented from sharing information through channels lying outside the system, its maximal permitted information in the event of such collusion is represented by sharing of the maximal permitted information at each step of the computation. This leads us to generalize the definition of ta to take as a parameter, in place of a single agent u, a nonempty set X ⊆ U. Inductively, we define ta_X by ta_X(ε) = ε and, for α ∈ A* and a ∈ A,

  ta_X(αa) = (ta_X(α), ta_{dom(a)}(α), a)   if dom(a) ↦ v for some v ∈ X,
  ta_X(αa) = ta_X(α)                        otherwise.

It is easily seen that ta_{{u}}(α) = ta_u(α), so this is in fact a generalization of the previous notion. Using this notion, we obtain the following definition.

Definition 17 A nondeterministic machine M is nTA-secure with respect to runtime coalitions (RCnTA) for a policy ↦ if, for all nonempty sets X ⊆ U, view_X contains no more information than ta_X ∘ Act about Act.

By Lemma 7, an equivalent statement is that for all nonempty X ⊆ U and α, β ∈ A*, if ta_X(α) = ta_X(β) then poss(view_X | Act = α) = poss(view_X | Act = β). We have the following straightforward implications between these notions:

Proposition 18 RCnTA implies nTA; PCnTA implies nTA.

These containment relationships are strict, as shown by the following two examples.

Example 19 For a machine that is RCnTA but not PCnTA, consider M as given in Fig. 5 under the security policy ∆_{H1,H2}. The two domains have observations in the set {⊥, 0, 1}, and each domain H_i has a single action h_i. Below each state s of M there is a label of the form obs_{H1}(s)/obs_{H2}(s), indicating the observations made by the two domains in s. Machine M is not only nTA but also RCnTA for ∆_{H1,H2}.
For instance, for α = h1 h2, β = h2 h1, and the coalition H = {H1, H2}, we have

  ta_H(α) = (ta_H(h1), ta_{H2}(h1), h2) = ((ε, ε, h1), ε, h2) ≠ ((ε, ε, h2), ε, h1) = ta_H(β),

and thus it is consistent with RCnTA-security that poss(view_H | Act = α) ≠ poss(view_H | Act = β). On the other hand, M is not PCnTA. Individually, we have ta_{Hi}(α) = ta_{Hi}(h_i) = ta_{Hi}(β) for i = 1, 2, so PCnTA requires that poss(⟨view_{H1}, view_{H2}⟩ | Act = α) = poss(⟨view_{H1}, view_{H2}⟩ | Act = β).
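The inductive definition of ta_X transcribes directly into code. The following sketch uses our own encodings (terms as nested tuples, with () standing for the empty term ε, and the policy given as a predicate) and reproduces the ta values computed above for the diagonal policy ∆_{H1,H2}.

```python
def ta(X, alpha, dom, interferes):
    """ta_X(alpha): the maximal permitted information of coalition X about
    the action sequence alpha. `dom` maps actions to their domains and
    `interferes(u, v)` encodes the policy u |-> v (assumed reflexive)."""
    t = ()
    for i, a in enumerate(alpha):
        d = dom[a]
        # the first clause fires when dom(a) may interfere with some v in X
        if any(interferes(d, v) for v in X):
            t = (t, ta({d}, alpha[:i], dom, interferes), a)
    return t
```

Since ta_{{u}} = ta_u, the same function also computes the individual terms, confirming both the inequality of the coalition terms and the equality of the individual terms for α = h1 h2 and β = h2 h1.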

Fig. 5. A system that is RCnTA but not PCnTA. (States s0–s8, with h1 and h2 transitions from s0; the label below each state gives the pair of observations of H1 and H2.)

But the run r = s0 −h1→ s3 −h2→ s6 ∈ R(α) gives the pair ⟨view_{H1}(r), view_{H2}(r)⟩ = ⟨⊥ h1 1, ⊥ h2 0⟩ of views, and there is no run in R(β) with this pair of views.

We next show that PCnTA does not imply RCnTA.

Example 20 Consider the security policy H N LL, consisting of a High domain H and two Low domains L1 and L2, all of which are allowed to interfere with each other, except that H is not permitted to interfere with either Low domain. Let M be the machine depicted in Fig. 6, where the only actions are ℓ2 in domain L2 and h in domain H. Only L1 has nontrivial observations, in {0, 1}, as indicated by the labels below states. Machine M is not only nTA but also PCnTA for H N LL. That it is not RCnTA for H N LL can be seen by comparing α = ℓ2 h and β = ℓ2 for the coalition X = {L1, L2}. We have ta_X(α) = ta_X(β), by definition, but

  poss(view_X | Act = α) = { ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩, ⟨0,⊥⟩ ℓ2 ⟨1,⊥⟩, ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩ ⟨1,⊥⟩ }
  ≠ { ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩, ⟨0,⊥⟩ ℓ2 ⟨1,⊥⟩ } = poss(view_X | Act = β),

where each element lists the joint observations of ⟨L1, L2⟩ interleaved with the coalition's actions. Note that this problem does not show up in the views considered separately:

  poss(⟨view_u⟩_{u∈X} | Act = α) = { ⟨0, ⊥ℓ2⊥⟩, ⟨01, ⊥ℓ2⊥⟩ } = poss(⟨view_u⟩_{u∈X} | Act = β).

Checking the other cases, it can be verified that machine M is PCnTA for H N LL.

Fig. 6. A system that is PCnTA but not RCnTA for H N LL.

Evidently, like the machine of Example 9, the machine of Example 20 should not be considered secure on a causal interpretation of security, since at state s1 the occurrence of the H action h causes a change to the L1 observation, whereas H is not supposed to interfere with L1. To obtain notions of security that avoid this difficulty, we can apply the persistence construct of Definition 10. This yields several further notions of security: P-nTA, P-PCnTA and P-RCnTA. Plainly the latter two imply the former, and each P-X implies the source notion X from which it is derived. We now present some examples showing that these notions are all distinct.

Example 21 Reconsider the machine of Example 20, but with an additional edge H ↦ L2 in the security policy. That is, we have an instance of the well-known downgrader policy, where the downgrader D happens to be called L2. Making the policy more liberal plainly cannot turn a secure system into an insecure one (for any of our definitions), so M is still PCnTA. Moreover, the counterexample from Example 20 no longer works, because now ta_X(α) = (ta_X(ℓ2), ta_H(ℓ2), h) = ((ε, ε, ℓ2), (ε, ε, ℓ2), h) differs from ta_X(β) = (ε, ε, ℓ2). It can be verified that M now satisfies RCnTA. However, it is immediate that this example does not satisfy any of the persistent notions of security (in particular, it does not satisfy the weakest of these, P-nTA), because the H action h changes the L1 observation from state s1.

Example 22 We show that P-nTA does not imply P-RCnTA, and at the same time that P-PCnTA also does not imply P-RCnTA. Recall the policy H N LL (see Example 20) and let M be as depicted in Fig. 7. Only L1 has nontrivial observations, in {0, 1}, indicated by labels below states; the other domains observe ⊥ in all states. Actions ℓ2 and h belong to domains L2 and H, respectively. Machine M is P-nTA for H N LL, and also P-PCnTA.

Fig. 7. A system that is P-nTA and P-PCnTA but not P-RCnTA.

That it is not P-RCnTA can be seen by comparing α = h ℓ2 ℓ2 and β = ℓ2 ℓ2 for the coalition X = {L1, L2}. We have ta_X(α) = ((ε, ε, ℓ2), (ε, ε, ℓ2), ℓ2) = ta_X(β), but

  poss(view_X | Act = α) = { ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩, ⟨0,⊥⟩ ℓ2 ⟨1,⊥⟩ ℓ2 ⟨1,⊥⟩ }
  ≠ { ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩, ⟨0,⊥⟩ ℓ2 ⟨0,⊥⟩ ℓ2 ⟨1,⊥⟩, ⟨0,⊥⟩ ℓ2 ⟨1,⊥⟩ ℓ2 ⟨1,⊥⟩ }

= poss(view_X | Act = β).

This problem does not show up when considering the views separately: both poss(⟨view_u⟩_{u∈X} | Act = α) and poss(⟨view_u⟩_{u∈X} | Act = β) are equal to { ⟨0, ⊥ℓ2⊥ℓ2⊥⟩, ⟨01, ⊥ℓ2⊥ℓ2⊥⟩ }.

V. SPECIAL CASES

In this section we reconsider some simple special cases of policies and systems, and show that our new definitions of security reduce to some of the existing notions of security discussed above. This lends credibility to the definitions and helps clarify their position with respect to the existing literature. We started with the aim of generalizing the notion of TA-security on deterministic systems, and introduced a variety of distinct definitions in the more general setting of nondeterministic systems. The following result shows that all of these notions collapse to TA-security in the deterministic setting.

Proposition 23 Given a security policy, the classes of deterministic machines that are nTA-, PCnTA-, or RCnTA-secure are all equal to the class of TA-secure machines. Moreover, the persistent versions of these notions in deterministic machines are also all identical to TA-security.

We have built our definitions of security for intransitive policies using ingredients from prior work on nondeterministic systems with respect to the simple policy H ↦̸ L. The following result characterizes our definitions in the special case of this policy (in nondeterministic systems) as collapsing to variants of the notion of Correctability.

Proposition 24 For the policy H ↦̸ L, we have that nTA = PCnTA = RCnTA = COR and P-nTA = P-PCnTA = P-RCnTA = P-COR.

We showed in Example 9 that COR and its persistent version P-COR are distinct. Thus, we have a collapse to two distinct notions in the case of this policy.

VI. UNWINDING

Unwinding is a useful technique for security proofs, which reduces the verification of security to checking the existence of binary relations on states.
In deterministic systems, unwinding is a sound and complete technique for the proof of NI, and the existence of a weak form of unwinding relations is a sound technique for IP-security and TA-security [9], [21], [24]. We define a generalization of this method that is sound for our definitions of security in nondeterministic systems.

Definition 25 Let (U, ↦) be a security policy, let M = (S, s0, A, −→, obs, dom) be a machine, and let S = (∼_u)_{u∈U} be a family of equivalence relations on S. For X ⊆ U define ∼_X = ∩_{u∈X} ∼_u. We say that S is a generalized weak unwinding on M with respect to ↦ (or GWU) if the following conditions are met for all s, s′, t ∈ S, u ∈ U, nonempty X ⊆ U, and a ∈ A:

  OC:   s ∼_u t ⇒ obs_u(s) = obs_u(t)
  LR:   s −a→ t ⇒ s ∼_{dom(a)↦̸} t
  GWSC: s ∼_X t ∧ s ∼_{dom(a)} t ∧ s −a→ s′ ⇒ ∃t′ (t −a→ t′ ∧ s′ ∼_X t′)

Here dom(a)↦̸ denotes the set of domains v such that dom(a) ↦̸ v.
Condition OC implies that a coalition X can distinguish any two states that some member can tell apart on the basis of its observation. Condition LR implies that an action cannot move the system to a state that is distinguishable, for any domain with which the action's domain is not permitted to interfere, from the state in which the action was performed. According to GWSC, if two states s and t can be distinguished neither by the coalition X nor by the domain of action a, then each state reachable from s by performing a is indistinguishable to X from some state reachable via a from t. This condition is the main point of difference from the notion of weak unwinding of [21]: it adds to the condition WSC the universal quantification over X (which is just a single domain u in WSC) and the existential quantification over t′ (this corresponds to the nondeterminism of the system, whereas WSC is designed for deterministic systems). Next we show that the existence of a GWU is sound evidence for PCnTA-security and RCnTA-security.

Theorem 26 If there is a GWU on M with respect to ↦, then M is PCnTA with respect to ↦.

Theorem 27 If there is a GWU on M with respect to ↦, then M is RCnTA with respect to ↦.

We may note that the definition of GWU is insensitive to the initial state of the machine. Thus, if the existence of a GWU is sound evidence for X-security, then it is also sound evidence for P-X-security. Thus, we also have the following.

Corollary 28 If there is a GWU on M with respect to ↦, then M is both P-PCnTA-secure and P-RCnTA-secure with respect to ↦.

Thus, by the corollary above and our previous results, the existence of a GWU implies every one of the security notions defined in this paper. Fig. 8 summarizes the relationships we have found to hold between the various security definitions discussed above.

VII. ACCESS CONTROL SYSTEMS

In the preceding sections, we developed definitions of security, and showed that the notion of generalized weak unwinding provides a sound proof technique for showing that

Fig. 8. Implications between notions defined in this paper. (GWU implies P-PCnTA and P-RCnTA; these imply PCnTA and RCnTA respectively, as well as P-nTA; and all of these imply nTA.)

these definitions hold in a system. Neither the definitions nor this proof technique provides much concrete guidance for the engineer seeking to construct a secure system, however. A result of Rushby [21] (refined in [24]) shows that, in deterministic systems, intransitive noninterference is guaranteed to hold in a system built from a collection of objects subject to an access control discipline satisfying a simple static check that is essentially Bell and La Padula's [2] information-flow condition. In the present section, we show that it is possible to generalize this result to nondeterministic systems.

We first recall the notion of a machine with structured state from [21]. This is a machine M (with states S) together with (we rename some of the components)
1) a set Obj of objects,
2) a set V of values,
and functions
3) contents : S × Obj → V, with contents(s, n) interpreted as the value of object n in state s,
4) observe, alter : U → P(Obj), with observe(u) and alter(u) interpreted as the sets of objects that domain u can observe and alter, respectively.
For brevity, we write s(x) for contents(s, x). We call the pair (observe, alter) the access control table of the machine. Rushby introduced reference monitor conditions on such machines in order to capture formally the intuitions associated with (observe, alter) being an access control table that restricts the ability of actions to "read" and "write" the objects in Obj. In order to formulate a generalization of these conditions for nondeterministic systems, we first need to introduce some further structure that constrains the nondeterminism in the system.
Define a choose-resolve characterization of nondeterminism in a machine M with states S and actions A to be a tuple (N, n, r), where N is a set, n : A × S → P(N), and r : A × S × N → S, such that the following properties hold:

  CR1: ∀s ∈ S, a ∈ A, c ∈ n(a, s) ( s −a→ r(a, s, c) )
  CR2: ∀s, t ∈ S, a ∈ A ( s −a→ t ⇒ ∃c ∈ n(a, s) (t = r(a, s, c)) )

Intuitively, N represents the set of all possible nondeterministic choices that can be made in the process of executing an action, n restricts those choices as a function of the particular state and action being executed, and r deterministically resolves the transition, as a function of the state, the action, and the nondeterministic choice made in that context. Condition CR1 says that the resolution of every nondeterministic choice allowed in the context of a given state and action is in fact a state to which a transition is possible. Conversely,

CR2 says that every transition can be obtained by resolving some allowed nondeterministic choice. Trivially, every nondeterministic machine has a choose-resolve characterization of nondeterminism. (For, we can simply take N = S, define n(a, s) = {t | s −a→ t} and r(a, s, t) = t, and CR1 and CR2 are then satisfied.) The following restrictions on the structure of the characterization make this notion more interesting.

Suppose that the machine M has structured state, with objects Obj and access control table (observe, alter). Write Val(x) for the set of all possible values of x ∈ Obj. Say that the choose-resolve characterization of nondeterminism (N, n, r) has local choices if for each x ∈ Obj there exist a set N_x and a function n_x : A × Val(x) → P(N_x) such that N = Π_{x∈Obj} N_x, and n(a, s) is the set of c ∈ N such that c_x ∈ n_x(a, s(x)) for all x ∈ Obj. That is, a nondeterministic choice on performing action a in state s is obtained by independently making a choice at each of the objects x ∈ Obj. Say that a machine has local nondeterminism if it is a machine with structured state that has a choose-resolve characterization with local choices.

For states s, t and u ∈ U, define s ∼oc_u t to hold if s(x) = t(x) for all x ∈ observe(u). Intuitively, s ∼oc_u t says that u could not distinguish s from t if all it knew were the contents of the objects it is permitted to observe. Similarly, for choices c, c′ ∈ Π_{x∈Obj} N_x and u ∈ U, we write c ∼oc_u c′ if c_x = c′_x for all x ∈ observe(u). The following conditions generalize the reference monitor conditions to machines with local nondeterminism.

  LC-RM1: s ∼oc_u t ⇒ obs_u(s) = obs_u(t)
  LC-RM2: c ∈ n(a, s) ∧ c′ ∈ n(a, t) ∧ s ∼oc_{dom(a)} t ∧ c ∼oc_{dom(a)} c′ ∧ s(x) = t(x) ∧ c_x = c′_x ⇒ r(a, s, c)(x) = r(a, t, c′)(x)
  LC-RM3: s −a→ t ∧ x ∉ alter(dom(a)) ⇒ t(x) = s(x)
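Conditions CR1 and CR2 are directly checkable on finite machines. The following sketch (our own encodings: transitions as a dict of successor sets, n and r as functions) verifies a candidate characterization, including the trivial one in which choices are the target states themselves.

```python
def check_choose_resolve(states, actions, trans, n, r):
    """Verify CR1 and CR2 for a candidate characterization (N, n, r) of the
    nondeterminism of a machine whose a-successors of s are trans[s, a]."""
    for s in states:
        for a in actions:
            succs = trans.get((s, a), set())
            # CR1: every allowed choice resolves to an actual successor
            if any(r(a, s, c) not in succs for c in n(a, s)):
                return False
            # CR2: every successor is the resolution of some allowed choice
            if any(all(r(a, s, c) != t for c in n(a, s)) for t in succs):
                return False
    return True

def trivial_characterization(trans):
    """The trivial characterization with N = S: n(a, s) allows exactly the
    a-successors of s, and r simply returns the chosen target state."""
    n = lambda a, s: trans.get((s, a), set())
    r = lambda a, s, c: c
    return n, r
```

A characterization that allows a choice in a state where no corresponding transition exists violates CR1 and is rejected.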

Intuitively, LC-RM1 says that the observations of a domain u depend only on the values of the objects u is permitted to observe. Condition LC-RM2 says that the value of object x after performing action a depends only on the previous values of the objects observable to dom(a), the nondeterministic choices made locally at those objects, the previous value of x, and the nondeterministic choice made at x. Condition LC-RM3 says that if dom(a) is not permitted to alter the value of object x, then the value of x is in fact unaffected when a is performed.

Example 29 Consider a system comprising two domains, for a sender S and a receiver R, communicating through an unreliable channel, represented as a lossy buffer. Here we have three objects Obj = {xS, xB, xR}, representing the state of the sender memory, the buffer, and the receiver memory, respectively. Letting Msg be a set of messages, we may suppose that Val(xS) = Val(xR) = Msg, so that the memory of the sender or receiver consists of a single message, and Val(xB) = Msg*,

so that the state of the buffer consists of a sequence of messages. A state of the system as a whole may be represented as a tuple (mS, b, mR) ∈ Msg × Msg* × Msg. We suppose that the sender has actions set(m) (setting its memory to message m) and put (adding the current memory value to the buffer), and that the receiver has an action get (fetching a message from the buffer and storing it in the receiver's memory). We suppose that only the buffer is unreliable, so we take the possibilities for local choices at the sender and receiver to be the unit sets N_xS = N_xR = {0}. However, the channel may drop messages, which we represent by taking N_xB = {ok, drop}, consisting of two choices representing normal performance of the action and dropping of the associated message. This allows us to represent the nondeterminism in the channel. For example, let n_xB(s, put) = n_xB(s, get) = {ok, drop}, and define r(put, (mS, b, mR), (0, v, 0)) to be (mS, mS · b, mR) if v = ok and (mS, b, mR) if v = drop; similarly, define get to remove the final message (if any) from the sequence b and assign it to the third component if v = ok, and to simply delete the final message from b otherwise. Suppose we define observations by obs_S((mS, b, mR)) = mS and obs_R((mS, b, mR)) = mR. Let the access control structure for the system be given by the following table:

       observe    alter
  S    xS         xS, xB
  R    xR, xB     xB, xR

Then (mS, b, mR) ∼oc_R (m′S, b′, m′R) just when (b, mR) = (b′, m′R), from which it follows that obs_R((mS, b, mR)) = mR = m′R = obs_R((m′S, b′, m′R)), confirming part of LC-RM1. The reader may verify that the remainder of LC-RM1 holds in this system, as does LC-RM2. With respect to the policy S ↦ R and R ↦̸ S, condition LC-RM3 also holds.

We also have a condition that relates the policy to the access control structure, essentially that identified by Bell and La Padula [2]:

  AOI: alter(u) ∩ observe(v) ≠ ∅ ⇒ u ↦ v

Intuitively, if there is an object x that can be altered by u and observed by v, then x provides a channel through which information can flow from u to v. Thus, for the system to be secure, this flow must be permitted by the policy. The following result shows that access control provides a design discipline that suffices to enforce our definitions of security.

Theorem 30 If M is a machine with local nondeterminism satisfying LC-RM1–LC-RM3 and AOI with respect to ↦, then M is P-RCnTA-secure and P-PCnTA-secure with respect to ↦.

Example 31 In addition to the conditions LC-RM1–LC-RM3, the system described in Example 29 satisfies condition AOI for the policy S ↦ R and R ↦̸ S. Note that we have alter(S) ∩ observe(R) = {xB} ≠ ∅, but the corresponding edge S ↦ R holds in the policy. On the other hand, R ↦̸ S,

but this is also consistent, since alter(R) ∩ observe(S) = ∅. Thus, by Theorem 30, the system is P-RCnTA-secure and P-PCnTA-secure with respect to ↦. The policy in the running example so far is a very simple transitive policy. We now sketch a generalization that gives some of the flavour of MILS [4] applications of the theory.
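As an illustration, the resolver r of the lossy-buffer system of Example 29 can be coded with its choices resolved per object. The function below is our sketch (names and encodings are ours): states are triples (mS, b, mR) with b a tuple of messages, and a choice vector carries one component per object, of which only the buffer's is consulted.

```python
def resolve(a, state, choice):
    """r(a, s, c) for the lossy-buffer system: resolve action a in state
    s = (mS, b, mR) under the local-choice vector c = (c_xS, c_xB, c_xR)."""
    mS, b, mR = state
    c_xB = choice[1]                      # only the buffer is unreliable
    if a[0] == "set":                     # set(m): sender memory := m
        return (a[1], b, mR)
    if a == "put":                        # prepend mS to b, unless dropped
        return (mS, (mS,) + b, mR) if c_xB == "ok" else (mS, b, mR)
    if a == "get" and b:                  # remove the final message from b,
        new_mR = b[-1] if c_xB == "ok" else mR   # delivering it, or not
        return (mS, b[:-1], new_mR)
    return state
```

Note that each clause respects the access control table: for instance, put never changes xR, consistent with xR ∉ alter(S) and LC-RM3.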

Fig. 9. Architecture for a MILS system. (Machines M1 and M2, each containing domains Hi and Li, a downgrader Ei and a network interface NIi, connected through the network domain N.)

Example 32 Suppose that we wish to connect two multilevel secure machines M1 and M2 so that they communicate across the internet. Figure 9 gives a policy that may serve as the abstract architecture of such a system. Each machine Mi contains a high-level domain Hi and a low-level domain Li, with the usual policy Hi ↦̸ Li enforced between the two. The internet is represented by the domain N. Additionally, there are two domains E1, E2 representing downgraders responsible for encrypting all communications leaving the domains H1, H2, as well as domains NI1, NI2 representing the network interface in each machine. Within each machine, access control (monitored, e.g., by a security kernel and by hardware access control features) can be used to enforce the local portion of the policy. At this lower, access control level, the system might contain objects representing communication channels, with details resembling the lossy buffer of Example 29. (For example, one expects that E1 and L1 will place their output in a buffer read by the network interface NI1, and that domain N consists of a network of lossy buffers.) As that example demonstrates, suitable access control settings on such nondeterministic components suffice to enforce the policy. Globally, the policy ensures that there is no direct flow of information from the domains Hi to N or to the domains Li. All such flow of information must be mediated by the domains Ei. This does not provide a complete proof of all the security properties one might wish the system to satisfy, but it helps to focus proofs that high-level information remains secure onto the specific trusted components Ei. The intention of the architecture is that once it has been verified that the components Ei properly encrypt all output, the fact that the architecture is enforced provides the rest of the proof of the desired security property that high-level information does not flow to the low domains.
We refer the reader to [4] for a more detailed explanation of this idea.

VIII. RELATED WORK AND CONCLUSIONS

Related work: As remarked above, the original work on semantics of intransitive noninterference [11], [21] was confined to transitive policies and deterministic systems. We

have already noted, in Section III, some of the main points of connection between our contributions in this paper and the voluminous literature on the transitive policy H ↦̸ L in nondeterministic systems. We focus here on comparing our work to that of some others who have considered semantics for intransitive policies specifically in the setting of nondeterministic systems. The earliest work in this vein appears to be that of Bevier and Young [3]. Unlike most of the subsequent work, Bevier and Young are alert to the issue of collusion: they present an example to justify a component of their definition as necessary in order to deal with inferences by groups of agents. Moreover, their definition satisfies a type of persistence: in effect, it states that from each state of the system, the possible future views of a list L of agents along a sequence of actions depend only on the ip of that sequence with respect to the list L (unfortunately, this generalization of ip is left undefined). They also give a corresponding definition of unwinding. Both the dependence in the definition of security and the notion of unwinding work with the concrete equivalence relations defined by s ∼_u t iff obs_u(s) = obs_u(t). Thus, although this work is close to our notions of P-PCnTA-security and generalized weak unwinding, it differs in the use of this concrete relation and in its basis in ip rather than ta. Roscoe and Goldsmith [19] have considered the application of the notion of local determinism within the setting of the process algebra CSP to give semantics to intransitive policies: they give two versions, one using a lazy abstraction operator and the other using a mixed operator that gives special treatment to signal events.
It is shown formally in [23] that, in the special case of deterministic systems, the lazy abstraction approach corresponds to the application of the purge function of NI, and the mixed abstraction approach corresponds to a notion of security that van der Meyden calls ITO-security: this differs from TA-security in that it uses functions that are like ta_u, but which track information about observations as well as information about actions. Thus, both definitions differ from the TA-based definitions we have presented. The issue of collusion is not considered in [19]; however, the semantics of a policy is given by checking for flows of information to each individual domain u from the set of domains that may not interfere with u, considered jointly. Mantel [14] defines an enrichment of the type of security policies we have considered (distinguishing between observable, deducible and noninterfering events) and studies the extension of his (trace-based) Basic Security Property compositional approach to definitions of security for H ↦̸ L to this richer policy setting. His definitions make use of a restricted quantification over sets of domains. However, the intended effect of this quantification is not to capture collusion attacks, but to spread the local purge-like effect of the basic security properties so as to simulate ip-like noninterference conditions. Another definition of intransitive noninterference in nondeterministic systems is given by von Oheimb [25]. His approach is also based on the ip function, but relates this to the possible

final observations in a run. As we noted in Section III-D, this may be inappropriate in nondeterministic systems. A notion of unwinding is also presented that is similar to ours, in that it quantifies over sets of domains. However, the main motivation for this appears to have been to obtain a soundness result for unwinding, rather than an overt recognition of the possibility of collusion attacks, since these are not acknowledged in the definition of security. von Oheimb also applies his results to a class of access control systems, but these are just Rushby's deterministic access control systems, whereas we have introduced a more general, nondeterministic class of such systems. Bisimulation-based definitions of security (an approach closely related to unwinding) are considered in the context of the very specific intransitive policy HDL in [5]. This work deals with the persistence dimension, but it is not clear how to generalize the definitions to policies richer than HDL. Gorrieri and Vernali [10] define similar notions of noninterference for the H ↦̸ L and HDL policies on labelled transition systems and elementary Petri nets: the emphasis here is on the particularities of these semantic models rather than essential novelty in the semantics of security, where they essentially follow [5]. Finally, Backes and Pfitzmann [1] go beyond the nondeterministic setting to consider intransitive policies in systems with probabilistic transitions and cryptographic notions of information flow. Their definitions are also limited to a few specific policies, and take a rather different approach from ours: it would take some research to clarify the relationships even for these special cases.

Conclusion: Our focus in this paper has been to present a conceptually clean set of generalizations of the TA-security semantics for intransitive noninterference to nondeterministic systems. We have teased out a number of parameters of such generalizations, leading to a range of definitions.
We characterize the relationships between these definitions and provide an unwinding proof technique that is sound for all of them. Several issues are left open by this work. The TA style of semantics concentrates on the possibility for attackers to make deductions about the actions that have been performed. In a nondeterministic setting, the observations nondeterministically resulting from these actions may also need to be protected against inference attacks (e.g., when these observations are randomly generated keys). Some TA-like definitions (TO-security and ITO-security) that take observations into account have been presented by van der Meyden [24]. It remains to work out how these definitions should be generalized to the nondeterministic setting. Another concern is that, whereas unwinding proof techniques should preferably be complete as well as sound (and some in the literature achieve both), our technique is just sound. It would be of interest to have sound and complete proof techniques for each of our definitions, in order to cover examples that lie outside of their intersection. Given the connection we establish between P-nTA and 0-forward correctability, ideas from Millen's sound and complete unwinding for forward correctability [16] may be appropriate in this case.

REFERENCES

[1] M. Backes and B. Pfitzmann, "Intransitive non-interference for cryptographic purposes," in Proc. IEEE Symp. on Security and Privacy, 2003, pp. 140–152.
[2] D. Bell and L. La Padula, "Secure computer system: unified exposition and Multics interpretation," Mitre Corporation, Bedford, MA, Technical Report ESD-TR-75-306, Mar. 1976.
[3] W. Bevier and W. Young, "A state-based approach to noninterference," in Proc. IEEE Computer Security Foundations Workshop, 1994, pp. 11–21.
[4] C. Boettcher, R. DeLong, J. Rushby, and W. Sifre, "The MILS component integration approach to secure information sharing," in Proc. 27th IEEE/AIAA Digital Avionics Systems Conference, Oct. 2008, pp. 1.C.2-1–1.C.2-14.
[5] A. Bossi, C. Piazza, and S. Rossi, "Modelling downgrading in information flow security," in Proc. IEEE Computer Security Foundations Workshop, 2004, pp. 187–201.
[6] S. Eggert, R. van der Meyden, H. Schnoor, and T. Wilke, "The complexity of intransitive noninterference," in Proc. IEEE Symp. on Security and Privacy, S&P 2011, 22–25 May 2011, Berkeley, California, USA, 2011, pp. 196–211.
[7] R. Focardi and S. Rossi, "Information flow security in dynamic contexts," in Proc. IEEE Computer Security Foundations Workshop, 2002, pp. 307–319.
[8] J. Goguen and J. Meseguer, "Security policies and security models," in Proc. IEEE Symp. on Security and Privacy, 1982, pp. 11–20.
[9] ——, "Unwinding and inference control," in Proc. IEEE Symp. on Security and Privacy, 1984, pp. 75–87.
[10] R. Gorrieri and M. Vernali, "On intransitive non-interference in some models of concurrency," in Foundations of Security Analysis and Design VI - FOSAD Tutorial Lectures, ser. LNCS, vol. 6858. Springer-Verlag, 2011, pp. 125–151.
[11] J. Haigh and W. Young, "Extending the noninterference version of MLS for SAT," IEEE Trans. Softw. Eng., vol. SE-13, no. 2, pp. 141–150, Feb. 1987.
[12] J. Halpern and K. O'Neill, "Secrecy in multiagent systems," in Proc. IEEE Computer Security Foundations Workshop.
Los Alamitos, CA, USA: IEEE Computer Society, 2002, p. 32.
[13] D. M. Johnson and F. J. Thayer, "Security and the composition of machines," in Proc. IEEE Computer Security Foundations Workshop, 1988, pp. 72–89.
[14] H. Mantel, "Information flow control and applications - bridging a gap," in FME, ser. LNCS, J. N. Oliveira and P. Zave, Eds., vol. 2021. Springer-Verlag, 2001, pp. 153–172.
[15] J. McLean, "A general theory of composition for trace sets closed under selective interleaving functions," in Proc. IEEE Symp. on Security and Privacy, May 1994, pp. 79–93.
[16] J. K. Millen, "Unwinding forward correctability," in Proc. IEEE Computer Security Foundations Workshop, 1994, pp. 2–10.
[17] S. M. More, P. Naumov, B. Nicholls, and A. Yang, "A ternary knowledge relation on secrets," in Proc. Conf. on Theoretical Aspects of Rationality and Knowledge (TARK-2011), Groningen, The Netherlands, July 12–14, 2011, pp. 46–54.
[18] J. Mullins, "Nondeterministic admissible interference," Journal of Universal Computer Science, vol. 6, no. 11, pp. 1054–1070, 2000.
[19] A. Roscoe and M. Goldsmith, "What is intransitive noninterference?" in Proc. IEEE Computer Security Foundations Workshop, 1999, pp. 228–238.
[20] J. Rushby, "Design and verification of secure systems," in Proc. 8th Symposium on Operating Systems Principles, Asilomar, CA, Dec. 1981, pp. 12–21 (ACM Operating Systems Review, vol. 15, no. 1).
[21] ——, "Noninterference, transitivity, and channel-control security policies," SRI International, Tech. Rep. CSL-92-02, Dec. 1992. [Online]. Available: http://www.csl.sri.com/papers/csl-92-2/
[22] D. Sutherland, "A model of information," in Proc. 9th National Computer Security Conf., 1986, pp. 175–183.
[23] R. van der Meyden, "A comparison of semantic models for intransitive noninterference," Dec. 2007, unpublished manuscript, available at http://www.cse.unsw.edu.au/~meyden.
[24] ——, "What, indeed, is intransitive noninterference?" in Proc. European Symposium On Research In Computer Security (ESORICS), ser. LNCS, J. Biskup and J. Lopez, Eds., vol. 4734. Springer-Verlag, 2007, pp. 235–250.

[25] D. von Oheimb, “Information flow control revisited: Noninfluence = Noninterference + Nonleakage,” in Proc. European Symposium On Research In Computer Security (ESORICS), ser. LNCS, vol. 3193. Springer-Verlag, 2004, pp. 225–243.

APPENDIX

Proof of Prop. 6: For the direction from (1) to (2), suppose f contains no more information than g about h. Let w ∈ W, and take v_f = f(w) and v_g = g(w). Suppose v_h ∈ poss(h | g = v_g); we show that v_h ∈ poss(h | f = v_f). From v_h ∈ poss(h | g = v_g) we have that there exists w′ ∈ W such that g(w′) = v_g = g(w) and h(w′) = v_h. Since f contains no more information than g about h, it follows from g(w) = g(w′) that there exists w″ ∈ W such that h(w″) = h(w′) and f(w″) = f(w). Thus h(w″) = v_h and f(w″) = v_f, and we have v_h ∈ poss(h | f = v_f), as required. Conversely, to show that (2) implies (1), assume that for all w ∈ W, if v_f = f(w) and v_g = g(w), then poss(h | g = v_g) ⊆ poss(h | f = v_f). Let w, w′ ∈ W satisfy g(w) = g(w′). We show that there exists w″ ∈ W such that h(w″) = h(w′) and f(w″) = f(w). Take v_f = f(w) and v_g = g(w). Then g(w′) = v_g, so h(w′) ∈ poss(h | g = v_g). It follows that h(w′) ∈ poss(h | f = v_f), and hence there exists w″ ∈ W such that h(w″) = h(w′) and f(w″) = v_f = f(w), as required.
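As a finite sanity check (not part of the paper), the two conditions related by Prop. 6 can be compared exhaustively on a small worlds model. The sketch below uses hypothetical names: functions on W are encoded as tuples, condition (1) is the relative-information ordering from the proof, and condition (2) is the poss-containment.

```python
from itertools import product

def poss(target, cond, val, W):
    """poss(target | cond = val): possible values of target given cond = val."""
    return {target[w] for w in W if cond[w] == val}

def no_more_info(f, g, h, W):
    """Condition (1): f contains no more information than g about h: whenever
    g(w) = g(w'), some world w'' has h(w'') = h(w') and f(w'') = f(w)."""
    return all(
        any(h[w2] == h[w1] and f[w2] == f[w0] for w2 in W)
        for w0 in W for w1 in W if g[w0] == g[w1]
    )

def poss_contained(f, g, h, W):
    """Condition (2): poss(h | g = g(w)) is contained in poss(h | f = f(w))."""
    return all(poss(h, g, g[w], W) <= poss(h, f, f[w], W) for w in W)

# Exhaustive confirmation of the equivalence over all f, g, h: {0,1,2} -> {0,1}.
W = range(3)
for f, g, h in product(product((0, 1), repeat=3), repeat=3):
    assert no_more_info(f, g, h, W) == poss_contained(f, g, h, W)
```

The exhaustive loop covers all 512 triples of functions, exercising both directions of the equivalence proved above.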

Proof of Prop. 7: Suppose that f contains no more information than h∘g about g. Let v, v′ ∈ V with h(v) = h(v′). We show that poss(f | g = v) ⊆ poss(f | g = v′); the opposite containment follows by symmetry. Let v_f ∈ poss(f | g = v). Then there exists w ∈ W with f(w) = v_f and g(w) = v. Since g is surjective, there exists w′ ∈ W such that g(w′) = v′. Thus h(g(w)) = h(v) = h(v′) = h(g(w′)). Since f contains no more information than h∘g about g, there exists a world w″ ∈ W such that g(w″) = g(w′) = v′ and f(w″) = f(w) = v_f. Thus v_f ∈ poss(f | g = v′). Conversely, suppose that for all v, v′ ∈ V with h(v) = h(v′) we have poss(f | g = v) = poss(f | g = v′). Let w, w′ ∈ W satisfy h(g(w)) = h(g(w′)). Then we have poss(f | g = g(w)) = poss(f | g = g(w′)). In particular, f(w) is an element of the left-hand side of this expression, hence also of the right-hand side. Thus there exists w″ with g(w″) = g(w′) and f(w″) = f(w). This shows that f contains no more information than h∘g about g.

Proof of Prop. 23: Let M be deterministic. The fact that M is TA-secure iff M is nTA-secure is immediate from the characterization of TA-security as stating that for all u ∈ U, the function view_u contains no more information than ta_u ∘ Act about Act, and the identical statement of the definition of nTA-security.

Next we show that if M is TA-secure then M is RCnTA-secure. We first note the following claim: if X ⊆ U and ta_X(α) = ta_X(β) for α, β ∈ A*, then ta_u(α) = ta_u(β) for all u ∈ X. For this, let F_u be the mapping on the range of ta_X defined by F_u(ε) = ε, F_u((A, B, a)) = F_u(A) when dom(a) ↛ u, and F_u((A, B, a)) = (F_u(A), B, a) otherwise. We show that F_u(ta_X(α)) = ta_u(α); the claim is then immediate. The proof proceeds by induction, with the following cases:
1) (Base case) α = ε: it is immediate from the definitions that F_u(ta_X(ε)) = F_u(ε) = ε = ta_u(ε).
2) α = α′a with dom(a) ↛ v for all v ∈ X. In this case,
   F_u(ta_X(α′a)) = F_u(ta_X(α′))
                  = ta_u(α′)     by induction
                  = ta_u(α′a)    since u ∈ X, so dom(a) ↛ u.
3) α = α′a with dom(a) ↦ v for some v ∈ X, but not dom(a) ↦ u. In this case,
   F_u(ta_X(α′a)) = F_u((ta_X(α′), ta_dom(a)(α′), a))
                  = F_u(ta_X(α′))   by def. of F_u
                  = ta_u(α′)        by induction
                  = ta_u(α′a)       since dom(a) ↛ u.
4) α = α′a with dom(a) ↦ u. In this case,
   F_u(ta_X(α′a)) = F_u((ta_X(α′), ta_dom(a)(α′), a))
                  = (F_u(ta_X(α′)), ta_dom(a)(α′), a)   by def. of F_u
                  = (ta_u(α′), ta_dom(a)(α′), a)        by induction
                  = ta_u(α′a)                           since dom(a) ↦ u.

Suppose that M is TA-secure. We show that it is RCnTA-secure by proving that for all α, β ∈ A* and X ⊆ U, if ta_X(α) = ta_X(β) then for all runs r with Act(r) = α, there exists a run r′ with Act(r′) = β and view_X(r′) = view_X(r). By determinism, for every sequence α ∈ A* there exists a unique run r with Act(r) = α; we write r_α for this run. Hence what we need to show is that if ta_X(α) = ta_X(β) then view_X(r_α) = view_X(r_β). The proof is by induction on the combined length of α and β. The base case of α = β = ε is trivial, since in this case there exists just the single run s_0 for each of α, β. By symmetry, we may consider for the induction just the case of α = α′a and β, where a ∈ A. Assume that ta_X(α′a) = ta_X(β). There are several possibilities. We use the notation s_0 ↑ α for the final state of the run r_α.
1) dom(a) ↛ u for all u ∈ X. In particular, we have dom(a) ∉ X. In this case we have ta_X(α′a) = ta_X(α′), so ta_X(α′) = ta_X(β). By induction, view_X(r_α′) = view_X(r_β). Moreover, we have ta_u(α′a) = ta_u(α′) for all u ∈ X, so, by TA-security, obs_u(s_0 ↑ α′) = obs_u(s_0 ↑ α′a). Thus obs_X(s_0 ↑ α′) = obs_X(s_0 ↑ α′a), and view_X(r_α′a) = view_X(r_α′) ∘ obs_X(s_0 ↑ α′a) = view_X(r_α′) = view_X(r_β), as required.
2) dom(a) ↦ u for some u ∈ X. In this case, ta_X(α′a) = (ta_X(α′), ta_dom(a)(α′), a). For this to equal ta_X(β), we cannot have β = ε, so we may write β = β′b. If dom(b) ↛ u for all u ∈ X, then we may swap the roles of α and β and apply the previous case, so we may assume that dom(b) ↦ u′ for some u′ ∈ X, and ta_X(β) = (ta_X(β′), ta_dom(b)(β′), b). It follows that a = b and ta_X(α′) = ta_X(β′). By the induction hypothesis, view_X(r_α′) = view_X(r_β′). Moreover, using the claim proved above, we have from ta_X(α) = ta_X(β) that ta_u(α) = ta_u(β) for all u ∈ X. By TA-security, it follows that obs_u(s_0 ↑ α) = obs_u(s_0 ↑ β) for all u ∈ X, i.e., obs_X(s_0 ↑ α) = obs_X(s_0 ↑ β). Thus view_X(r_α) = view_X(r_α′) · a · obs_X(s_0 ↑ α) = view_X(r_β′) · a · obs_X(s_0 ↑ β) = view_X(r_β), as required.
This completes the proof that TA-security implies RCnTA-security. The converse, that RCnTA-security implies TA-security, is trivial from the definition.

Similarly, PCnTA-security implies TA-security. Next we prove that if M is TA-secure then M is PCnTA-secure. Let X ⊆ U, and assume that α, α′ ∈ A* are action sequences such that ta_u(α) = ta_u(α′) for all u ∈ X. We need to show that for all runs r with Act(r) = α, there exists a run r′ with Act(r′) = α′ and view_u(r) = view_u(r′) for all u ∈ X. As above, by determinism, there is a unique run r_α with Act(r_α) = α, so we need only show that view_u(r_α) = view_u(r_α′) for all u ∈ X. This follows directly using TA-security. Therefore M is PCnTA-secure.

To show that all the persistent versions are equal to TA-security, it suffices to show the equivalence of TA-security and P-TA-security in deterministic machines. Since P-X-security implies X-security by definition, it is enough to show that TA-security implies P-TA-security. For this, suppose that M is TA-secure with respect to ↦, and let s be a reachable state of M. We need to show that if ta_u(α) = ta_u(β) then obs_u(s ↑ α) = obs_u(s ↑ β). Since s is reachable, there exists a sequence γ such that s = s_0 ↑ γ, where s_0 is the initial state of M. A straightforward induction on |α| + |β| shows that if ta_u(α) = ta_u(β) then ta_u(γα) = ta_u(γβ). By TA-security of M, ta_u(γα) = ta_u(γβ) implies obs_u(s_0 ↑ γα) = obs_u(s_0 ↑ γβ). Since s_0 ↑ γα = s ↑ α and s_0 ↑ γβ = s ↑ β, we have the desired conclusion.
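The claim about F_u used in the proof of Prop. 23 is easy to test mechanically. The sketch below is a hypothetical encoding (not from the paper): ta-values are nested tuples with () for ε, over a three-domain downgrader policy H ↦ D ↦ L.

```python
from itertools import product

# Reflexive intransitive policy H |-> D |-> L; POL[u] = { v | u |-> v }.
POL = {'H': {'H', 'D'}, 'D': {'D', 'L'}, 'L': {'L'}}
DOM = {'h': 'H', 'd': 'D', 'l': 'L'}  # domain of each action

def ta(u, alpha):
    """ta_u(eps) = (); ta_u(alpha.a) = (ta_u(alpha), ta_dom(a)(alpha), a)
    if dom(a) |-> u, and ta_u(alpha) otherwise."""
    if not alpha:
        return ()
    alpha, a = alpha[:-1], alpha[-1]
    if u in POL[DOM[a]]:
        return (ta(u, alpha), ta(DOM[a], alpha), a)
    return ta(u, alpha)

def ta_X(X, alpha):
    """Group version: record a whenever dom(a) |-> v for some v in X."""
    if not alpha:
        return ()
    alpha, a = alpha[:-1], alpha[-1]
    if POL[DOM[a]] & X:
        return (ta_X(X, alpha), ta(DOM[a], alpha), a)
    return ta_X(X, alpha)

def F(u, val):
    """F_u(eps) = eps; F_u((A, B, a)) = F_u(A) if dom(a) does not interfere
    with u, and (F_u(A), B, a) otherwise."""
    if val == ():
        return ()
    A, B, a = val
    return (F(u, A), B, a) if u in POL[DOM[a]] else F(u, A)

# Check the claim F_u(ta_X(alpha)) = ta_u(alpha) on all short sequences.
X = {'D', 'L'}
for n in range(5):
    for alpha in product('hdl', repeat=n):
        for u in X:
            assert F(u, ta_X(X, alpha)) == ta(u, alpha)
```

The loop checks every sequence of length at most four over the three actions, for both members of the coalition {D, L}.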

Proof of Th. 26: Given that there exists a generalized weak unwinding on a machine M = (S, s_0, A, −→, obs, dom), we need to show that if ta_u(α) = ta_u(α′) for all u in a set X of domains, then poss(⟨view_u⟩_u∈X | Act = α) = poss(⟨view_u⟩_u∈X | Act = α′). We proceed as follows. We first construct from M a machine M′ in which there is a single agent whose view, in some runs, is equivalent to the required collection of views ⟨view_u⟩_u∈X. We show that there exists a GWU on this new machine. To obtain the desired conclusion, we then apply the fact that the existence of a GWU implies nTA-security. (This follows from Theorem 27, which is proved below, and Proposition 18.)

The new machine contains a larger set of security domains. Let Û = { û | u ∈ U } and w be fresh security domains, i.e., distinct from each other and from the domains in U. Let U′ = U ∪ Û ∪ {w} and define the security policy ↦′ on U′ by ↦′ = ↦ ∪ { (u′, û) | u′ ↦ u, u, u′ ∈ U } ∪ { (û, w) | u ∈ U } ∪ { (v, v) | v ∈ U′ }. In words, every original domain u′ is allowed to also interfere with the domain û if it was already allowed to interfere with u, and only the domains in Û can interfere with w. Next we define an unfolding M′ of M by M′ = (S′, s′_0, A′, −→′, obs′, dom′), where S′ = R(M) × (O(AO+)* ∪ {⊥}), s′_0 = (s_0, ⊥), A′ = A ∪ { c_u | u ∈ U }, and

  (r, v) -a->′ (r -a-> s, v)     if a ∈ A and last(r) -a-> s
  (r, v) -a->′ (r, view_u(r))    if a = c_u

  obs′_u((r, v)) = obs_u(last(r))   if u ∈ U
                 = ⊥                if u ∈ Û
                 = v                if u = w

  dom′(a) = dom(a)   if a ∈ A
          = û        if a = c_u

Intuitively, a state (r, v) of M′ records a history r of M together with a view v of some domain in M as the current observation of the domain w. Actions of M change only the history, and the action c_u copies the current view of u in M to be the new observation of w.

Let S = (∼_u)_u∈U be a GWU on M for ↦. We construct a new GWU S′ = (≈_u)_u∈U′ on M′ for ↦′. Firstly, for u ∈ U, define (r, v) ≈_u (r′, v′) if view_u(r) = view_u(r′) and last(r) ∼_u last(r′). Secondly, for û ∈ Û, define (r, v) ≈_û (r′, v′) if (r, v) ≈_u (r′, v′). Thirdly, for domain w, define (r, v) ≈_w (r′, v′) if v = v′. We now show that S′ is a GWU.

• For condition OC, assume that (r, v) ≈_u (r′, v′). We need to show that obs′_u((r, v)) = obs′_u((r′, v′)). We consider the three cases for u ∈ U′.
Case u ∈ U. By definition of ≈_u we have view_u(r) = view_u(r′) and hence obs_u(last(r)) = obs_u(last(r′)). It is immediate that obs′_u((r, v)) = obs′_u((r′, v′)).
Case u ∈ Û. In this case the desired conclusion is immediate from the fact that the observations of domains in Û are equal to ⊥ at all states.
Case u = w. By definition of ≈_w we have that v = v′ and hence obs′_w((r, v)) = v = v′ = obs′_w((r′, v′)).

• For condition LR, let a ∈ A′ and (r, v), (r′, v′) ∈ S′ such that (r, v) -a->′ (r′, v′). We need to show that (r, v) ≈_u (r′, v′) for all u ∈ U′ such that dom′(a) ↛′ u. We consider the two cases of a ∈ A and a = c_u for u ∈ U.
Case a ∈ A. In this case, v = v′ and there exists t ∈ S such that r′ = r -a-> t. Moreover, dom(a) ↛′ u just when u ∈ U and dom(a) ↛ u, or u = x̂ and dom(a) ↛ x, or u = w. In the case of u ∈ U and dom(a) ↛ u, we have by condition LR for S that last(r) ∼_u t, hence by OC that obs_u(last(r)) = obs_u(t), from which it follows that view_u(r) = view_u(r′). Since we also have last(r) ∼_u t = last(r′), this yields (r, v) ≈_u (r′, v′), as required. In the case of u = x̂ and dom(a) ↛ x, we obtain by the preceding case that (r, v) ≈_x (r′, v′), from which (r, v) ≈_u (r′, v′) follows by definition. Finally, in the case u = w, it is immediate from v = v′ that (r, v) ≈_w (r′, v′).
Case a = c_u. In this case, r = r′ and v′ = view_u(r). From r = r′ we obtain that last(r) ∼_u last(r′) and view_u(r) = view_u(r′) for all u ∈ U, so (r, v) ≈_u (r′, v′) for all u ∈ U. It follows from this that also (r, v) ≈_û (r′, v′) for all û ∈ Û. Since dom′(c_u) = û ↦′ w, we do not need to consider the case u = w.

• For condition GWSC, let X ⊆ U′, a ∈ A′, and (r1, v1), (r1′, v1′), (r2, v2) ∈ S′ such that (r1, v1) -a->′ (r1′, v1′), (r1, v1) ≈_X (r2, v2), and (r1, v1) ≈_dom′(a) (r2, v2). We need to show that there exists a state (r2′, v2′) ∈ S′ such that (r2, v2) -a->′ (r2′, v2′) and (r1′, v1′) ≈_X (r2′, v2′). We consider the two cases of a ∈ A and a = c_x for x ∈ U. Let Y = (X ∩ U) ∪ { u ∈ U | û ∈ X }. From (r1, v1) ≈_X (r2, v2) we have that view_u(r1) = view_u(r2) and last(r1) ∼_u last(r2) for u ∈ Y. In particular, the latter part gives that last(r1) ∼_Y last(r2). If w ∈ X, we also have v1 = v2.
Case a ∈ A. In this case, from (r1, v1) -a->′ (r1′, v1′) we have that v1 = v1′ and there is t1 ∈ S such that r1′ = r1 -a-> t1 and last(r1) -a-> t1. We noted above that last(r1) ∼_Y last(r2). Since dom(a) ∈ U in this case, we also have from (r1, v1) ≈_dom(a) (r2, v2) that last(r1) ∼_dom(a) last(r2). Hence, by GWSC for S in M, there exists t2 ∈ S such that last(r2) -a-> t2 and t1 ∼_Y t2. To obtain (r2′, v2′) we take v2′ = v2 and r2′ = r2 -a-> t2. It is immediate that (r2, v2) -a->′ (r2′, v2′), so we just need to show that (r1′, v1′) ≈_X (r2′, v2′). Now note that from t1 ∼_Y t2 we have by OC that obs_u(t1) = obs_u(t2) for u ∈ Y. Thus, for u ∈ Y, we have view_u(r1′) = view_u(r1) · a · obs_u(t1) if dom(a) = u, and view_u(r1′) = view_u(r1) ∘ obs_u(t1) if dom(a) ≠ u; similarly, view_u(r2′) = view_u(r2) · a · obs_u(t2) or view_u(r2) ∘ obs_u(t2), respectively. Since view_u(r1) = view_u(r2) and obs_u(t1) = obs_u(t2), in both cases view_u(r1′) = view_u(r2′). Additionally, last(r1′) = t1 ∼_u t2 = last(r2′). Finally, if w ∈ X then we have v2′ = v2 = v1 = v1′. Combining these conclusions, we get that (r1′, v1′) ≈_X (r2′, v2′).
Case a = c_x. In this case, we have r1′ = r1 and v1′ = view_x(r1) and dom′(a) = x̂ for x ∈ U. To obtain (r2′, v2′), we take r2′ = r2 and v2′ = view_x(r2), and it is immediate that (r2, v2) -a->′ (r2′, v2′). To show that (r1′, v1′) ≈_X (r2′, v2′), we consider the cases of u ∈ Y and u = w ∈ X. In case u ∈ Y, we have from (r1, v1) ≈_X (r2, v2) that (r1, v1) ≈_u (r2, v2), so view_u(r1′) = view_u(r1) = view_u(r2) = view_u(r2′) and last(r1′) = last(r1) ∼_u last(r2) = last(r2′). In case u = w ∈ X, we have from (r1, v1) ≈_dom′(a) (r2, v2) and dom′(a) = x̂ that view_x(r1) = view_x(r2). Since v1′ = view_x(r1) and v2′ = view_x(r2), we have that v1′ = v2′. It follows from the above that (r1′, v1′) ≈_X (r2′, v2′), as required.

Now, by the fact that a generalized weak unwinding implies nTA-security, the machine M′ is nTA-secure for the new policy ↦′. We use this to conclude that M is PCnTA-secure with respect to ↦. Let X = {u_1, u_2, ..., u_n} ⊆ U be a coalition in M, and let α, α′ ∈ A* be two action sequences such that ta↦_u(α) = ta↦_u(α′) for all u ∈ X. (We make the policy explicit here since we work with both ↦ and ↦′.) We have to show that for all runs r_0 of M with Act(r_0) = α, there exists a run r_0′ with Act(r_0′) = α′ and view_u(r_0) = view_u(r_0′) for all u ∈ X.

We first note that it follows from ta↦_u(α) = ta↦_u(α′) for all u ∈ X that ta↦′_w(α c_{u_1} c_{u_2} ... c_{u_n}) = ta↦′_w(α′ c_{u_1} c_{u_2} ... c_{u_n}). For, it can be seen that

  ta↦′_{û_{i+1}}(α c_{u_1} ... c_{u_i})
    = ta↦′_{û_{i+1}}(α)                           since û_j ↛′ û_{j′} for j ≠ j′
    = ta↦_{u_{i+1}}(α)                            by α ∈ A* and def. of ↦′
    = ta↦_{u_{i+1}}(α′)                           by assumption and u_{i+1} ∈ X
    = ta↦′_{û_{i+1}}(α′ c_{u_1} ... c_{u_i})      by symmetry.

Thus, inductively, with basis ta↦′_w(α) = ε = ta↦′_w(α′), we have

  ta↦′_w(α c_{u_1} ... c_{u_{i+1}})
    = (ta↦′_w(α c_{u_1} ... c_{u_i}), ta↦′_{û_{i+1}}(α c_{u_1} ... c_{u_i}), c_{u_{i+1}})
    = (ta↦′_w(α′ c_{u_1} ... c_{u_i}), ta↦′_{û_{i+1}}(α′ c_{u_1} ... c_{u_i}), c_{u_{i+1}})
    = ta↦′_w(α′ c_{u_1} ... c_{u_{i+1}}).

Consider now a run r_0 of M with Act(r_0) = α. Extend this to a corresponding run r in M′ with Act(r) = α c_{u_1} ... c_{u_n}. Applying the fact that M′ is nTA-secure with respect to ↦′, there exists a run r′ with Act(r′) = α′ c_{u_1} ... c_{u_n} and view_w(r) = view_w(r′). But (assuming, without loss of generality, that in M distinct domains have disjoint sets of observations), we have that view_w(r) = ⊥ · view_{u_1}(r_0) · ... · view_{u_n}(r_0) and view_w(r′) = ⊥ · view_{u_1}(r_0′) · ... · view_{u_n}(r_0′), where r_0′ is the run of M corresponding to the prefix of r′ with action sequence α′. It follows that view_{u_i}(r_0) = view_{u_i}(r_0′) for all u_i ∈ X. (Note that the observations on domains in U along corresponding runs are identical in M and M′.) This is what we needed to conclude that M is PCnTA-secure with respect to ↦.

Before we present the proof of the next theorem, we present two useful lemmas.

As previously mentioned, given a sequence of actions α ∈ A* and a domain u ∈ U, src_u(α) is the set of domains that are allowed to produce causal interference to u. This definition can be extended to a set of domains X ⊆ U by replacing src_u(ε) = {u} with src_X(ε) = X in the definition, and extending the inductive case by src_X(aα) = src_X(α) ∪ { dom(a) | ∃v ∈ src_X(α) (dom(a) ↦ v) }. The ip function with respect to a set of domains (instead of a single domain) can also be defined in a similar way. Given X ⊆ U, the intransitive purge function ip_X : A* → A* is defined inductively by ip_X(ε) = ε and, for a ∈ A and α ∈ A*,

  ip_X(aα) = a · ip_X(α)   if dom(a) ∈ src_X(aα)
           = ip_X(α)       otherwise.

Given X ⊆ U, define a swappable relation ≃_X on A* by γ ≃_X γ′ if there exist α, α′ ∈ A* and a, b ∈ A such that γ = αabα′, γ′ = αbaα′, and the following conditions hold:

  dom(a)↦ ∩ dom(b)↦ ∩ src_X(abα′) = ∅ ,          (1)
  dom(a)↦ ∩ X = ∅  or  dom(b)↦ ∩ X = ∅ .         (2)

Define ≃*_X as the transitive closure of ≃_X. Intuitively, condition (1) says that the actions a and b interfere with the group X through α′ at most via distinct causal chains. In other words, no individual domain in src_X(α′) can tell the ordering of a and b. Moreover, assuming that both dom(a) and dom(b) are in src_X(abα′) (as will be the case in our applications of this definition), neither of the domains dom(a) and dom(b) may interfere with the other. One consequence of condition (1) is that there does not exist a single domain u in X such that both dom(a) ↦ u and dom(b) ↦ u. Condition (2) states the stronger condition that the coalition X cannot directly observe the occurrence of both a and b, even via interferences on different domains in X.

The following result generalizes a result for individual domains from [6].

Lemma 33: For all α, β ∈ A* and X ⊆ U, we have ta_X(α) = ta_X(β) ⇔ ip_X(α) ≃*_X ip_X(β).

Proof: To reuse the corresponding result for individual domains (ta_u(α) = ta_u(β) ⇔ ip_u(α) ≃*_u ip_u(β)), introduce a fresh domain u_X and an augmented policy ↦′ = ↦ ∪ { (v, u_X) | v↦ ∩ X ≠ ∅ }, that is, v ↦′ u_X iff v↦ ∩ X ≠ ∅. It follows that ta_X(α) for ↦ equals ta_{u_X}(α) for ↦′. The definition of αabβ ≃_{u_X} αbaβ in [6] is equivalent to dom(a)↦′ ∩ dom(b)↦′ ∩ ({u_X} ∪ src_{u_X}(abβ)) = ∅. But this can be rewritten as

  dom(a)↦′ ∩ dom(b)↦′ ∩ {u_X} = ∅  and            (3)
  dom(a)↦′ ∩ dom(b)↦′ ∩ src_{u_X}(abβ) = ∅ .      (4)

(3) is equivalent to: u_X ∉ dom(a)↦′ or u_X ∉ dom(b)↦′. By definition of ↦′, this is equivalent to dom(a)↦ ∩ X = ∅ or dom(b)↦ ∩ X = ∅, i.e., condition (2) in the definition of ≃_X. To see that, in the presence of (3), condition (4) is equivalent to (1), observe that src_X(abβ) \ X = src_{u_X}(abβ) \ (X ∪ {u_X}), and moreover v↦′ \ {u_X} = v↦. Hence, when (3) holds, (4) is equivalent to dom(a)↦ ∩ dom(b)↦ ∩ src_X(abβ) = ∅, i.e., (1).
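The inductive definitions of src_X and ip_X are directly executable, which can help when checking small instances of the purge by hand. A minimal sketch follows, with a hypothetical encoding of the downgrader policy H ↦ D ↦ L (all names are illustrative, not from the paper):

```python
# POL[u] = { v | u |-> v }, reflexive; DOM gives the domain of each action.
POL = {'H': {'H', 'D'}, 'D': {'D', 'L'}, 'L': {'L'}}
DOM = {'h': 'H', 'd': 'D', 'l': 'L'}

def src_X(X, alpha):
    """src_X(eps) = X; src_X(a.alpha) adds dom(a) when dom(a) |-> v for
    some v in src_X(alpha)."""
    if not alpha:
        return set(X)
    s = src_X(X, alpha[1:])
    return s | {DOM[alpha[0]]} if POL[DOM[alpha[0]]] & s else s

def ip_X(X, alpha):
    """Intransitive purge for a group: keep a iff dom(a) in src_X(a.alpha)."""
    if not alpha:
        return ''
    a, rest = alpha[0], alpha[1:]
    return (a + ip_X(X, rest)) if DOM[a] in src_X(X, alpha) else ip_X(X, rest)

# h survives when a causal chain H -> D -> L follows it, but an h with no
# later chain to L is purged.
assert ip_X({'L'}, 'hdl') == 'hdl'
assert ip_X({'L'}, 'dhl') == 'dl'
```

This mirrors the prefix-oriented definitions above: an action is kept exactly when its domain can influence the coalition through the remainder of the sequence.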

Lemma 34: If ta_X(α) = ta_X(β) then src_X(α) = src_X(β).

Proof: By induction on the length of α. Write a ∈ [α] if a appears in the sequence α. We apply the following characterization of src: src_X(ε) = X; src_X(α · a) = src_X(α) if dom(a)↦ ∩ X = ∅; and src_X(α · a) = src_X(α) ∪ src_{dom(a)}(α) otherwise.
Base case: suppose α = ε. Then, since ta_X(α) = ta_X(β), we have dom(b)↦ ∩ X = ∅ for all b ∈ [β]. Therefore src_X(ε) = X = src_X(β).
Induction step: suppose ta_X(α · a) = ta_X(β). We have the following cases.
• If dom(a)↦ ∩ X = ∅, then ta_X(α · a) = ta_X(α). Then src_X(α · a) = src_X(α), which equals src_X(β) by the induction hypothesis.
• If dom(a)↦ ∩ X ≠ ∅, then ta_X(α · a) = (ta_X(α), ta_{dom(a)}(α), a). As ta_X(α · a) = ta_X(β), β must be of the form β′ · a · β″ such that ta_X(α) = ta_X(β′), ta_{dom(a)}(α) = ta_{dom(a)}(β′), and dom(b)↦ ∩ X = ∅ for all b ∈ [β″]. Now src_X(β) = src_X(β′ · a) = src_X(β′) ∪ src_{dom(a)}(β′), and src_X(α · a) = src_X(α) ∪ src_{dom(a)}(α). Since ta_X(α) = ta_X(β′) and ta_{dom(a)}(α) = ta_{dom(a)}(β′), we have src_X(α) = src_X(β′) and src_{dom(a)}(α) = src_{dom(a)}(β′) by the induction hypothesis, and the result follows.
Note that we also obtain from this result that if ip_X(α) = ip_X(β) then src_X(α) = src_X(β), because ip_X(α) = ip_X(β) implies ta_X(α) = ta_X(β) (for the proof see the extended version of [24]).

Lemma 35: Let (∼_u)_u∈U be a GWU on M. Then, for all α, β ∈ A*, X ⊆ U and s, t ∈ S with s ∼_srcX(α) t, and for all runs r starting from s:
1) if Act(r) = α and ip_X(α) = ip_X(β), then there exists a run r′ starting from t such that Act(r′) = β and view_X(r) = view_X(r′);
2) if Act(r) = α and (i) ip_X(α) = α, (ii) ip_X(β) = β, and (iii) α ≃_X β, then there exists a run r′ starting from t such that Act(r′) = β and view_X(r) = view_X(r′).

Proof:
1) We proceed by induction on the combined length of α and β. For the base case, if α = β = ε, then s ∼_srcX(α) t implies obs_u(s) = obs_u(t) for all u ∈ X ⊆ src_X(α), by OC, which gives view_X(r) = view_X(r′), where r′ is the empty run starting from t. For the inductive step, consider the sequence α = aα′ and suppose that the claim holds for shorter sequences. Assume r = s -a-> r1, where first(r1) = s′ and Act(r1) = α′. We have the following two cases:
• Suppose dom(a) ∉ src_X(a · α′). By LR we have s ∼_dom(a)↛ s′. By assumption we have src_X(α) ∩ dom(a)↦ = ∅, i.e., src_X(α) ⊆ dom(a)↛. Then s′ ∼_srcX(α) t, by transitivity, and obs_u(s′) = obs_u(s) = obs_u(t) for all u ∈ X ⊆ src_X(α) by OC, which implies view_X(r) = view_X(r1). In this case, ip_X(α′) = ip_X(α) = ip_X(β), so by the inductive hypothesis there exists a run r′ starting from t such that Act(r′) = β and view_X(r′) = view_X(r1), and the result immediately follows.
• Suppose dom(a) ∈ src_X(a · α′). Then ip_X(α) = a · ip_X(α′), and for this to equal ip_X(β) we must have β = b · β′ for some action b. If dom(b) ∉ src_X(b · β′), then we may obtain the desired conclusion using the previous case (note that src_X(b · β′) = src_X(a · α′) by Lemma 34 and the remark following it). Consequently, we may assume without loss of generality that a = b and ip_X(α′) = ip_X(β′). By GWSC, there exists a transition t -a-> t′ such that s′ ∼_srcX(a·α′) t′. Since X ⊆ src_X(a · α′), it follows from this using OC that obs_u(s′) = obs_u(t′) for all u ∈ X. Since src_X(α′) ⊆ src_X(a · α′), we have s′ ∼_srcX(α′) t′. By the inductive hypothesis, there exists a run r1′ starting from t′ such that Act(r1′) = β′ and view_X(r1) = view_X(r1′). By prepending the transition t -a-> t′, we have view_X(r) = view_X(s -a-> r1) = view_X(t -a-> r1′). Thus r′ = t -a-> r1′ is the desired run.
2) Write α = α1 a b α2 and β = α1 b a α2 with dom(a)↦ ∩ dom(b)↦ ∩ src_X(abα2) = ∅, and let r1, r2 be runs and s1, s2, s3 states such that r = r1 -a-> s2 -b-> r2 with last(r1) = s1, first(r2) = s3, Act(r1) = α1 and Act(r2) = α2. We also have the assumption that ip_X(α) = α and ip_X(β) = β, and from this it follows that dom(a) ∈ src_X(aα2) and dom(b) ∈ src_X(bα2), so src_X(abα2) = src_X(baα2). Moreover, we must have dom(a) ↛ dom(b), since otherwise we would have dom(b) ∈ dom(a)↦ ∩ dom(b)↦ ∩ src_X(abα2), contrary to the assumption that this set is empty. Similarly, we have dom(b) ↛ dom(a). We construct the required run r′ by the following three steps.
a) There exists a run r1′ from t to a state t1 with Act(r1′) = α1, view_X(r1′) = view_X(r1) and s1 ∼_srcX(abα2) t1.
b) There exist transitions t1 -b-> t2 -a-> t3 such that view_X(t1 -b-> t2 -a-> t3) = view_X(s1 -a-> s2 -b-> s3) and s3 ∼_srcX(α2) t3.
c) There exists a run r2′ starting from t3 with Act(r2′) = α2 and view_X(r2′) = view_X(r2).
We let r′ = r1′ -b-> t2 -a-> r2′. The first and the last steps can be straightforwardly proved by repeatedly applying GWSC. We focus on the second step, as follows:
(i) Suppose there is a run r1′ from t to t1 such that s1 ∼_srcX(abα2) t1 and view_X(r1) = view_X(r1′).
(ii) By LR and s1 -a-> s2, we have s1 ∼_dom(a)↛ s2.
(iii) By transitivity and (i)+(ii), we have s2 ∼_srcX(abα2)∩dom(a)↛ t1.
(iv) Since dom(b) ∈ src_X(abα2) and dom(b) ∈ dom(a)↛, applying GWSC there exists t1 -b-> t2 such that s3 ∼_srcX(abα2)∩dom(a)↛ t2.
(v) Again, by LR and t1 -b-> t2, we have t1 ∼_dom(b)↛ t2.
(vi) By transitivity and (i)+(v), we have s1 ∼_srcX(abα2)∩dom(b)↛ t2.
(vii) Since dom(a) ∈ src_X(abα2) and dom(a) ∈ dom(b)↛, applying GWSC there exists t2 -a-> t3 such that s2 ∼_srcX(abα2)∩dom(b)↛ t3.
(viii) By LR and s2 -b-> s3, we have s2 ∼_dom(b)↛ s3.
(ix) By transitivity and (vii)+(viii), we have s3 ∼_srcX(abα2)∩dom(b)↛ t3.
(x) By LR and t2 -a-> t3, we have t2 ∼_dom(a)↛ t3.
(xi) By transitivity and (iv)+(x), we have s3 ∼_srcX(abα2)∩dom(a)↛ t3.
(xii) By (ix)+(xi), we have s3 ∼_srcX(abα2)∩(dom(a)↛ ∪ dom(b)↛) t3.
Now, by src_X(abα2) ∩ dom(a)↦ ∩ dom(b)↦ = ∅, we have src_X(abα2) ⊆ U \ (dom(a)↦ ∩ dom(b)↦). Therefore src_X(abα2) ∩ (dom(a)↛ ∪ dom(b)↛) = src_X(abα2) ∩ (U \ (dom(a)↦ ∩ dom(b)↦)) = src_X(abα2). Applying this to (xii), we have s3 ∼_srcX(abα2) t3. Combining this with step (c), there exists a run r2′ such that view_X(r2) = view_X(r2′), and by (i) we have view_X(r1) = view_X(r1′). Next we show that view_X(s1 -a-> s2 -b-> s3) = view_X(t1 -b-> t2 -a-> t3). Suppose there exists u ∈ X such that obs_u(s1) ≠ obs_u(s2). Then we have dom(a)↦ ∩ X ≠ ∅, so by condition (2) of the definition of ≃_X we have dom(b)↦ ∩ X = ∅. Therefore, by OC together with (i), (v), (viii) and s3 ∼_srcX(abα2) t3, we have obs_u(s2) = obs_u(s3) = obs_u(t3) and obs_u(s1) = obs_u(t1) = obs_u(t2) for all u ∈ X. The other case, when dom(b)↦ ∩ X ≠ ∅, yields a similar conclusion, so in both cases we have view_X(s1 -a-> s2 -b-> s3) = view_X(t1 -b-> t2 -a-> t3).

Lemma 36: If ip_X(α) = α and α ≃_X β then ip_X(β) = β.
Proof: By the definitions of ip and src.

Proof of Th. 27: Suppose there exists a generalized weak unwinding on M, and let X ⊆ U. For all α, α′ ∈ A*, if ta_X(α) = ta_X(α′), then by Lemma 33 we have ip_X(α) ≃*_X ip_X(α′). Note that by Lemma 34, we have src_X(α) = src_X(α′). Since ∼_u is an equivalence relation for all u ∈ U, we have s_0 ∼_srcX(α) s_0, and we show that for all runs r with Act(r) = α, there exists a run r′ such that Act(r′) = α′ and view_X(r) = view_X(r′). Since ip_X(α) ≃*_X ip_X(α′), by definition there exist action sequences α_1, α_2, ..., α_n such that α_1 = ip_X(α), α_n = ip_X(α′), and α_i ≃_X α_{i+1} for all i ∈ {1, ..., n−1}. By Lemma 36 we have ip_X(α_i) = α_i for all i ∈ {1, ..., n}. Next, by Lemma 35(1), there exists a run r_1 such that Act(r_1) = α_1 and view_X(r_1) = view_X(r). Then, by applying Lemma 35(2) n−1 times, we have that for all i = 1, ..., n−1 there exists a run r_{i+1} with Act(r_{i+1}) = α_{i+1} and view_X(r_i) = view_X(r_{i+1}). Finally, by Lemma 35(1), there exists a run r′ such that Act(r′) = α′ and view_X(r′) = view_X(r_n).

Proof of Th. 30: We show the following stronger statement: if M is a system with structured state with local nondeterminism, satisfying LC-RM1–LC-RM3 and AOI, then the relations (∼oc_u)_u∈U form a generalized weak unwinding. The result then follows using Cor. 28.

The proof that OC is satisfied is direct from LC-RM1. For LR, suppose that dom(a) ↛ u and s -a-> t. We show that s ∼oc_u t by assuming x ∈ observe(u) and showing that s(x) = t(x). Since dom(a) ↛ u, by AOI, alter(dom(a)) ∩ observe(u) = ∅, so x ∉ alter(dom(a)). By LC-RM3, we have that s(x) = t(x), as claimed.

For GWSC, suppose that s ∼oc_X t and s ∼oc_dom(a) t and s -a-> s′. By CR2, there exists c ∈ n(a, s) such that s′ = r(a, s, c). Since choices are local, c is an Obj-indexed tuple with c_x ∈ n_x(a, s(x)) for all x ∈ Obj. For X ⊆ U we write observe(X) for the union of observe(u) over u ∈ X. Define the Obj-indexed tuple c′ as follows: for x ∈ observe(X) ∪ observe(dom(a)), we let c′_x = c_x; otherwise, let c′_x be any value in n_x(a, t(x)). We claim that c′ ∈ n(a, t). To show this, we need to show that c′_x ∈ n_x(a, t(x)) for all x ∈ Obj. By construction, it suffices to consider the case where x ∈ observe(X) ∪ observe(dom(a)). Note that here we have s(x) = t(x), by the assumption that s ∼oc_X t and s ∼oc_dom(a) t. Thus, we have c′_x = c_x ∈ n_x(a, s(x)) = n_x(a, t(x)), as required.

Now let t′ = r(a, t, c′). By CR1, we have t -a-> t′. We show that s′ ∼oc_X t′. Consider u ∈ X and let x ∈ observe(u). We apply LC-RM2. Note that we already have that c ∈ n(a, s) and c′ ∈ n(a, t) and s ∼oc_dom(a) t. The antecedents c ∼oc_dom(a) c′ and c_x = c′_x follow by construction of c′. We also have that s(x) = t(x), from the fact that s ∼oc_u t and u ∈ X and x ∈ observe(u). Thus the antecedent of LC-RM2 is satisfied, and we deduce that s′(x) = r(a, s, c)(x) = r(a, t, c′)(x) = t′(x). Since this holds for all u ∈ X and x ∈ observe(u), we in fact have s′ ∼oc_X t′, as required.
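The choice-splicing step in the GWSC argument above (building c′ from c) can be pictured concretely. The sketch below is a hypothetical encoding, not the paper's formalism: choices and the local choice sets n_x(a, t(x)) are dictionaries indexed by objects, and `pick` stands in for an arbitrary selection of a locally possible value.

```python
def splice_choice(c, n_t, observed, pick=min):
    """Build the tuple c' from the GWSC proof: copy c on the objects in
    observe(X) or observe(dom(a)), and take any locally possible value
    elsewhere.
    c        : dict mapping each object x to the chosen value c_x
    n_t      : dict mapping each object x to the local choice set n_x(a, t(x))
    observed : set of objects in observe(X) union observe(dom(a))
    """
    return {x: (c[x] if x in observed else pick(n_t[x])) for x in n_t}

# On observed objects s(x) = t(x), so copying c_x keeps it locally possible;
# on the remaining objects any member of n_x(a, t(x)) will do.
c_prime = splice_choice({'x': 1, 'y': 7}, {'x': {1, 2}, 'y': {3, 4}}, {'x'})
assert c_prime['x'] == 1 and c_prime['y'] in {3, 4}
```

The point of the construction is locality: because the choice set of each object depends only on that object's state, the spliced tuple is again a globally possible choice, which is what licenses the transition t -a-> t′ via CR1.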
By construction, it suffices to consider the case where x ∈ observe(X) ∪ observe(dom(a)). Note that here we have s(x) = t(x), by the assumption that s ∼oc X t and s ∼oc t. Thus, we have c ∈ n (a, s(x)) = n (a, t(x)), x x x dom(a) as required. a Now let t0 = r(a, t, c0 ). By CR1, we have t −→ t0 . We show 0 oc 0 that s ∼X t . Consider u ∈ X and let x ∈ observe(u). We apply LC-RM2. Note that we already have that c ∈ n(a, s) oc 0 and c0 ∈ n(a, t) and s ∼oc dom(a) t. The antecedents c ∼dom(a) c 0 0 and c(x) = c (x) follow by construction of c . We also have that s(x) = t(x), from the fact that s ∼oc u t and u ∈ X and x ∈ observe(u). Thus, the antecedent of LC-RM2 is satisfied, and we deduce that s0 (x) = r(a, s, c)(x) = r(a, t, c0 )(x) = t0 (x). Since this holds for all u ∈ X and x ∈ observe(u), 0 we in fact have s0 ∼oc X t , as required.