
Measuring IDS-estimated attack impacts for rational incident response: A decision theoretic approach

Zonghua Zhang a,*, Pin-Han Ho b, Liwen He c

a Information Security Research Center, NICT, Japan
b ECE Department, University of Waterloo, Canada
c Security Research Center, BT Group CTO Office, UK

This paper subsumes our previous work (Zhang et al., 2007).
* Corresponding author. E-mail address: [email protected] (Z. Zhang).

Article info

Article history:
Received 4 November 2008
Received in revised form 9 March 2009
Accepted 16 March 2009

Keywords: Network security; Intrusion detection; Risk management; Incident response; Security evaluation

Abstract

Intrusion detection systems (IDSs) play a vital role in defending our cyberspace against attacks. However, whether misuse-based, anomaly-based, or a combination of the two, an IDS can only partially reflect the true system state, owing to excessive false alerts, low detection rates, and inaccurate incident diagnosis. An automated response component built upon IDS must therefore consider the stale and imperfect picture inferred from its reports and take action accordingly.

This article presents an approach for measuring attack impact with the evidence of IDS alerts, with the objective of suggesting rational responses through cost-benefit analysis. More specifically, based on the realistic assumption that a system evolves as a Markov decision process conditioned upon the current system state, imperfect observations, and actions, we use a partially observable Markov decision process to model the efficacy of IDS as providing a probabilistic assessment of the state of system assets, and to maximize a reward signal (defined as a function of both cost and benefit) by taking appropriate actions in response to the estimated system states in terms of desirable security properties. The ultimate goal is to move the system to more secure states with respect to pre-specified security metrics, and to assist system administrators in identifying the best tradeoff between the cost and benefit of security policies. We finally use a benchmark data set to illustrate the application of our methodology and conduct a proof-of-concept validation of its feasibility and efficiency.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

IDS plays a paramount role in defending against cyber-attacks as a backup measure for proactive security techniques such as authorization, authentication, and access control. Although its development spans nearly three decades, the gap between detection and response has never been filled well. An ideal IDS would be free of false positives and false negatives, detect intrusions with high accuracy, avoid excessive response time, and save computational cost. In practice, however, no such perfect system exists, due to the inherent nonstationarity and increasing complexity of today's computer systems, the rapid emergence of new vulnerabilities and novel attacks, and complicated human factors. To deal with the huge volume of alerts generated by IDS, various correlation techniques have been developed for



exploring their causal relationships, which facilitates root-cause analysis and the construction of attack scenarios for easier comprehension and incident analysis by system administrators. However, since these techniques are built solely on top of intrusion alerts and focus on the attack/intrusion itself, they cannot provide meaningful information for alert prioritization, characterization of intrusion impacts, and response selection. As such, when an organization deploys an IDS and uses its reports as a security reference for adopting countermeasures, it must be aware of the uncertain and possibly stale picture of the monitored system.

From a holistic standpoint, intrusion detection ultimately aims at measuring system security. To the same end, methodologies for security evaluation (Zhang et al., 2008) and risk assessment (Mu et al., 2008) play complementary roles. While system dependability and security share some common properties, and models for dependability evaluation are well developed, security evaluation is still at an early stage (Nicol et al., 2004). This is largely due to the complicated human factor, which plays the most important role in the state transitions that must be captured during modeling. As a typical tool for vulnerability analysis, the attack graph helps us gain insight into potential attacks by identifying and enumerating system vulnerabilities (Schneier, 1999). However, the elements of such graphs are individual system vulnerabilities or their simple combinations, attack impacts cannot be easily examined or estimated, and human factors are essentially excluded. Moreover, most such techniques are tailored to be system- or application-specific, and no system-level methodology exists that can quantify the amount of security provided by a particular system-level approach. As a result, they generally require considerable human effort and are infeasible for automated and real-time applications.

Furthermore, the final goal of IDS is to assist system administrators (we use system administrator and defender interchangeably in this article) in estimating implicit system states with respect to security properties, thus suggesting appropriate responses. As a matter of fact, different organizations differ in their business goals and security requirements, and they also have different budgets for security investment and management. Thus, the cost factors associated with response deserve careful examination. In general, this article addresses two major issues:

• We intend to bridge the low-level IDS alerts with high-level system states which are meaningful to and interpretable by the security administrator.
• We integrate IDS reports with cost-effective response that is specific to the system assets and business goals of an organization by defining a cost function.

Specifically, the process of intrusion response by a security administrator can be regarded as a decision-making process whose goal is to achieve a suite of rational reactions. The decision-making framework then serves as a theoretical basis for developing a holistic methodology that tightly integrates fine-grained IDS alerts with coarse-grained security policies while avoiding the deficiencies of IDS. In particular, IDS reports are taken as observations and are used to probabilistically estimate system states, which evolve upon the previous states and the cost-sensitive actions. A rational response consisting of a set of optimal actions is thus generated accordingly.

The remainder of the article is organized as follows. We investigate related work in Section 2. Section 3 presents an architecture, together with the design assumptions, to give a holistic perspective on our methodology. We then propose an approach to measuring attack impacts for rational incident response in Section 4. In Section 5, we report a benchmark data set-based experiment. Finally, conclusions are drawn in Section 6.

2. Related work

The gap between IDS reports and high-level response has existed since the birth of IDS, and it has never been bridged well despite much effort, such as the work on security evaluation and risk assessment. Arnes et al. (2006) used hidden Markov models (HMMs) to evaluate the risks of intrusions. With this model, the likelihood of transitions between security states can be represented, enabling IDS alerts to be prioritized for risk assessment. An online risk assessment technique using D–S evidence theory was proposed by Mu et al. (2008). However, both approaches are limited to the system level and fail to provide a reference that helps security administrators take cost-sensitive, rational responses. Also, the partially observable Markov decision process (POMDP) has been used by Kreidl and Frazier (2004) and Zhang and Shen (2005), respectively, for developing autonomic intrusion detection models. The former work focused on a feedback control mechanism, its response cost function was not general enough to leverage different system assets, and human factors were not included at all. The latter work aimed to correlate a number of parametric anomaly detectors, while attack impacts and their evaluation were outside its scope.

In addition to system-level modeling and analysis, high-level intrusion response is attracting research attention as well. For example, Dewri et al. (2007) proposed an approach to achieving optimal security hardening by formulating a multi-objective optimization problem on attack tree models of networks. The motivation of this work is that a security administrator is impelled to select a subset of response measures within a limited budget so as to minimize the damage to the system in the presence of minor security flaws. The method has two shortcomings. First, it relies on attack trees, so the deficiencies of attack trees, such as intractability, error-proneness, and limited scalability, are inherited. Some methods like attack graph ranking (Mehta et al., 2006; Sawilla and Ou, 2008) and probabilistic inference (Qin and Lee, 2004) are promising yet limited efforts to deal with those drawbacks. Second, an evolutionary algorithm was used to solve the formulated optimization problem; it was deemed time-consuming to achieve convergence and is not suitable for real-time applications, since even a reasonable latency cannot be guaranteed.


3. Design assumptions and rationale

This section first gives several assumptions supporting our design, and then presents an overview of the design rationale.

3.1. Assumptions

To facilitate the design of our methodology, along with the specific models and algorithms, several assumptions must be stated. In particular, we assume a target system to be composed of a number of assets playing different functional roles. Also, we assume the target system is survivable, namely, its availability cannot be totally compromised and it can still provide key services in the presence of attacks; we do not need strict assumptions on system confidentiality and integrity. Moreover, a compromised system is expected to be maintained and recovered by appropriate countermeasures. Furthermore, we assume that a target system is usually deployed with an IDS, and that a number of IDS sensors are distributed across the system to monitor different system assets. However, we do not necessarily assume an alert correlator to be operating, even though such a component may provide more information for accurate state estimates. A desirable property of an automated response mechanism is that it can tune the tradeoff between system maintenance cost and failure cost to achieve rational defense, allowing the intervention of a defender who may define cost functions for different scenarios. In this article, we specifically use a variant of the Markov decision process for modeling our design. Thus, an additional assumption supporting the model formulation is that the system state is ergodic and evolves as a Markov process conditioned upon the states, observations, and actions.

3.2. Design rationale

The architecture shown in Fig. 1 illustrates a holistic methodology for taking rational response based on IDS reports, which can be post-processed by an alert correlation engine, providing evidence/observations for estimating the system state (counterpart technologies such as vulnerability analysis play the same role), along with a priori knowledge of network assets. A set of actions is then taken in accordance with predefined security metrics, as well as the business goals of the organization. The figure contains three key layers, which are interleaved with each other:

• System, which can be a stand-alone computer, an enterprise network, or a database system. The system normality is characterized by a variety of observable subjects, such as system calls, privileged processes, applications, and resource usage patterns. A system is usually also armed with some proactive security mechanisms.
• Middleware, which is essentially a software module coupling the low-level observations reflecting system states in terms of particular security metrics (which can be obtained by IDS and vulnerability/risk assessment tools (Hariri et al., 2003)) with the high-level response policy.
• Human factor, which involves all the high-level security-related human behavior, such as investing in security products, adopting security policies, and maintaining system performance. All the human factors should be specified with the particular business goals and security requirements of the organization.

Fig. 1 – A holistic architecture for achieving rational response by measuring attack impacts via IDS reports.

More specifically, from a functional perspective, the architecture is composed of three components: an events monitor and processor, a system state estimator, and a response actuator. The design basis of the architecture is IDS, which can be either host-based (HIDS) or network-based (NIDS), depending on the underlying system, and distributed across different hosts. In addition to an IDS, an alert correlator (Valeur et al., 2004) can be used for intrusion alert aggregation, classification, and correlation, so as to provide a more accurate assessment of the state of system assets. Built upon the IDS sensors and the alert correlation module, a middleware takes the reports as observations and models their efficacy for estimating implicit system states. The states can be characterized at either a coarse-grained level (confidentiality, availability, integrity, etc.) or a fine-grained level (quality of service and specific security metrics), usually specified by the security administrator in accordance with the system configuration and security demands. Based on the estimated states, a security policy is then adopted to move the system towards more secure states. To achieve that, a critical issue is to define a cost function that correlates the system maintenance cost consumed by defense and the failure cost due to intrusion, by relating system states to human factors and quantifying security properties in terms of predefined security metrics.

In addition, our design does not suffer from scalability and adaptability issues. As shown in Fig. 1, the middleware is transparent to the underlying systems, thereby leaving scalability and management complexity issues to the lower system level. It only needs to deal with IDS reports and measure attack impacts for estimating system states, thanks to the deployment of IDS sensors, a well-studied topic even in very large-scale, high-speed networks. More importantly, the design provides a formal way for the interaction between the system operator and the


underlying system, allowing a defender to update security policies and redefine cost factors without interfering with low-level system configurations. Thus, little extra effort is demanded of the defender beyond the unavoidable management burden as the network grows more complex.
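To make the functional decomposition more concrete, the sketch below outlines the three middleware components as minimal Python interfaces. The class and method names are illustrative assumptions for exposition only, not the paper's implementation.

```python
# Illustrative sketch of the three middleware components described above:
# events monitor/processor, system state estimator, response actuator.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Alert:
    """An IDS report, possibly post-processed by an alert correlator."""
    sensor: str
    target: str
    severity: float


class EventMonitor:
    def collect(self) -> List[Alert]:
        # Gather alerts from HIDS/NIDS sensors within one report update cycle.
        return []


class StateEstimator:
    def estimate(self, alerts: List[Alert]) -> Dict[str, float]:
        # Map alerts to a probability distribution over high-level states.
        return {"W": 1.0, "P": 0.0, "C": 0.0, "B": 0.0}


class ResponseActuator:
    def act(self, belief: Dict[str, float]) -> str:
        # Choose the countermeasure that best balances Cost_m and Cost_f.
        return "no_action"


def one_cycle(m: EventMonitor, e: StateEstimator, a: ResponseActuator) -> str:
    """One observe-estimate-respond cycle of the middleware."""
    return a.act(e.estimate(m.collect()))
```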

4. A holistic methodology

It is obvious that the application of such a methodology relies on specifying several elements: observations extracted from IDS, security metrics, human factors, and cost factors associated with response. Since our attention is limited to the models and algorithms for the middleware, the design is assumed to be transparent to IDS.

4.1. Security metrics and their quantification

The rich history of the development of security technology suggests that building a perfectly secure system is a mission impossible in practice. Although much effort has been paid to quantitative measure of system security and risk assessment, most of reported techniques are tailored to be system- and application-specific, whereas a general and systematic methodology that can quantify the system in terms of particular security metrics has never been seen. A challenging issue must be tackled is to determine appropriate security metrics, which attract increasing research attention in both security and dependability domain (Bloomfield et al., 2006). To do that, in general, we firstly need to identify the candidate metrics for security and the characteristics that each metric must posses. We then must list the major classes of evidence for security and mapping of classes of evidence to metrics. The final step is to design candidate methods for combining the various classes of evidence toward the desired security attributes. Following the outline, we then consider a typical network scenario for illustration. As shown in Fig. 2, a network is composed of different network assets, e.g., web server, file server, database server, and a number of hosts. Also, the network is armed with firewall, network sniffer, and IDS. In order to specify security metrics of this network, two key observations must be considered:  Networks deployed in different organizations may differ in security demands and security properties. For instance, if Fig. 2 represents a campus network, the availability of database server and file server would be mostly concerned; if it is a bank network or military network, the confidentiality of communications between network elements and the integrity of database server would be primarily preserved.  A network usually consists of a number of assets playing different roles, such as database sever, Web server, and workstations in Fig. 2. Thus, a security administrator may deploy IDS sensors and prioritize alerts in terms of the significance of data source. The two observations naturally serve as a basis for quantifying security metrics. Specifically, similar to risk/threat assessment methodologies, some evaluation scores can be assigned to the network assets and services to measure their

Fig. 2 – A network example.

significance in the whole network. With the knowledge on network assets, attack taxonomies, and security mechanisms, we introduce a number of notations for quantifying security metrics: Ds, significance of network component or service; Da, threat degree of potential attack; Dd, cost for taking security measure. Thus, two coarse-grained security metrics can be defined as follows:  Costm ¼ Ds  Dd: maintenance cost due response.  Costf ¼ Ds  Da: failure cost due to attack.

to

intrusion

For example, in Fig. 2, we assume Ds(workstation) ¼ 1, Ds(Fileserver) ¼ 20, and Da(user) ¼ 1, Da(root) ¼ 10. Then an attacker with root privilege on File server is Costf ¼ 10  20 ¼ 200, while the same attacker with common user privilege on a workstation is Costf ¼ 1  1 ¼ 1. Although such metrics are site- and human-specific, they do help us to quantify the relevant costs and leverage their importance. As such, the intrusion impacts can be indirectly quantified by specifying the IDS alert reports originating from different data sources. Obviously, this pair of security metrics allows more finegrained security elements to be incorporated. For instance, risk state was defined by Mu et al. (2008) as an integration of risk index and risk distribution. In particular, risk index is used to measure threat degree (failure cost in our definition) to a protected target caused by an intrusion, and it particularly considers IDS alerts by specifying and integrating alert amount, alert confidence, alert type number, alert severity, and alert relevance score; risk distribution is used to quantify target importance (the significance of network asset in our model). Benefiting from such models, security metrics would be more accurately quantified. Also, a formal approach to evaluating security attributes by Butler (2002) can be extended and applied to refine our definitions as well.
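As a worked illustration of the two metrics, the following Python sketch reproduces the Fig. 2 example; the Dd values and countermeasure names are hypothetical placeholders, since only Ds and Da are given in the text.

```python
# Coarse-grained metrics of Section 4.1 with the illustrative Fig. 2 values.
D_s = {"workstation": 1, "file_server": 20}   # significance of assets (from text)
D_a = {"user": 1, "root": 10}                 # threat degree of gained privilege (from text)
D_d = {"kill_connection": 2, "patch": 5}      # cost of countermeasures (assumed)


def failure_cost(asset: str, privilege: str) -> int:
    return D_s[asset] * D_a[privilege]        # Cost_f = D_s x D_a


def maintenance_cost(asset: str, action: str) -> int:
    return D_s[asset] * D_d[action]           # Cost_m = D_s x D_d


print(failure_cost("file_server", "root"))    # 200, as in the text
print(failure_cost("workstation", "user"))    # 1
```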

4.2. Modeling human factors

It has been commonly recognized that the introduction of human factors causes security evaluation to be a complicated and meticulous process. In particular, in system dependability evaluation, the quantitative models usually assume that system failures are caused by random events in hardware or rare events in software (accidental component failures), and this randomness can be quantified in a way that permits the


determination of system-level properties. However, this assumption does not hold for security evaluation, as intentional attacks always make system failures correlated with each other in subtle ways, even though such failures appear random to an external analyst. Thus, it is infeasible to apply the classical stochastic models that have been used to analyze system dependability (Nicol et al., 2004) to quantify system failures caused by attacks. Alternatively, we can evaluate system security in terms of attack impacts, which we believe can cover a large class of attack instances. To that end, associating attack impacts with security metrics (as discussed in the previous section) is an elementary yet essential step.

Moreover, the modeling of human factors would be incomplete if only the attack impact (caused by the attacker) were considered. In practice, it should also take into account the actions of legitimate users and of the defender (or security administrator), due to their close interactions with the attacker. This is an important observation that serves as a key to the modeling and analysis of system security with respect to intrusion impacts and response policies. Intuitively, an attack-defense scenario can be regarded as a multi-player game upon a system, in which the defender is goal-directed, i.e., aims at ensuring particular security properties. In order to achieve a goal, a defender must take a set of actions based on her (no gender implication) knowledge, meanwhile observing the system states estimated from attack impacts. She must also consider the defense cost of enforcing security policies, such as patching vulnerabilities, updating anti-virus software, and upgrading authorization policies, as well as the cost due to reaction, e.g., server downtime and route reconfiguration. To keep the analysis concise and consistent, we treat all the costs associated with defense and reaction as maintenance cost, i.e., Costm. Therefore, a well-planned response policy can be viewed as a strategic decision process enacted by a defender, integrating her goal, actions, and cost. Formally, an IDS-driven rational response can be characterized in terms of the following properties:

Property 1 (goal-directed). A defender always intends to adopt the most effective countermeasure to mitigate intrusions, preserving the key security properties of the target system and further moving the system to more secure states.

Property 2 (policy-dependence). The transitions of security-related system states are partially dependent on the security policy, i.e., the set of countermeasures taken by the defender.

Property 3 (cost-awareness). In a multi-stage attack, a defender is usually aware of the benefit to be achieved by taking certain countermeasures, as well as the cost to be paid for both failure and maintenance of the system, and thereby takes the optimal policy by balancing the tradeoff between benefit and cost.

The three properties imply that a defender in the context of security management would always take the most rational response according to the attack impacts estimated from IDS or any other security tools. A pair of more general human-related security investment problems can be described as follows:


• Given a sufficient budget, a defender tends to adopt all available security mechanisms to achieve the highest level of security;
• Given a limited budget, a defender intends to enforce the most effective security policies by identifying the tradeoff between the system failure cost Costf due to intrusion and the maintenance cost Costm of taking response.

4.3. A cost function for rational response

As discussed previously, it is generally recognized that the precise calibration of the factors relevant to cost analysis in security/risk assessment is infeasible, largely due to the intrinsic complexity of today's information systems, and also because of system-specific configurations, security policies, and human-biased specification. A more realistic way is to identify those elements that are essential to the security rules, models, and policies, and then leverage them in a generic, cost-sensitive manner with tradeoff concerns. To that end, we must specify the behaviors of the defender using quantitative or qualitative methods, and the metric quantification in Section 4.1 may achieve this goal to some extent. A rational response policy has to weigh the maintenance cost Costm due to defense and recovery against the failure cost Costf due to intrusion, so as to achieve the best tradeoff between the two factors. As in the cost-sensitive IDS model proposed by Lee et al. (2002), the result of an IDS at a particular stage must fall into one of five cases:

1. Case 1 (right detection): "alert" for a true attack;
2. Case 2 (false positive): "alert" for a normal behavior;
3. Case 3 (false negative): "silence" for an attack;
4. Case 4 (normal case): "silence" for a normal operation;
5. Case 5 (misdiagnosis): an attack is detected but misclassified as a wrong variant/stage.

The corresponding case-specific definition of the cost is shown in Fig. 3 (Case 4 is replaced by Case 5 in the figure, as its cost is always 0) in the order of quadrants, where Costm denotes the maintenance cost, Costf the failure cost, Costp the penalty cost due to a false alert, and Cost′m the maintenance cost due to wrong detection of an attack variant. Since the cost of each stage in a multi-stage attack must fall into one of the four cases in Fig. 3, the cumulative cost, i.e., the sum of the costs of independent system states under attack, can be simply calculated as follows:

Cost_s = Σ_{i=0}^{n} { [β_i·Cost_m(s_i) + α_i(1−β_i)·Cost_f(s_n)]·γ_0 + [Cost_p(s_i) + Cost_m(s′_i)]·γ_1 + [Cost_m(s′_i) + α′_i·Cost_f(s_n)]·γ_2 },  with Σ_{k=0}^{2} γ_k = 1, α_{i−1} ≤ α_i, 0 < i ≤ n    (1)

where α_i, α′_i ∈ [0, 1] are weights denoting the threat degree of an ongoing attack, and β_i ∈ [0, 1] is used to balance the tradeoff between the failure cost and the maintenance cost. Both parameters can be configured for specific scenarios.


Fig. 3 – Cost model: the relationship between maintenance cost and failure cost in different cases.

For example, if a new worm breaks out in the wild, the security administrator can adjust β_i to a smaller value, implying that the failure cost will dominate the cost function; α_i can also be set to a larger value so that the new worm's infection behavior can be more easily observed (assuming the infection behavior can be successfully detected). In addition, γ_i = 1 if the corresponding case is selected and γ_i = 0 otherwise; s_0, …, s_n ∈ S′ is the set of system states under attack, and s′_i ∈ S′ is a misdiagnosed state whose true state is s_i ∈ S′. Finally, α_i·Cost_f(s_n) can be replaced by Cost_f(s_i) if the weight α_i of a particular state s_i cannot be determined.
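A minimal Python sketch of the cumulative cost in Eq. (1) follows; the per-stage inputs are hypothetical and would in practice come from the metrics of Section 4.1 and the IDS report cases above.

```python
# Cumulative cost of Eq. (1): at each stage exactly one gamma_k term applies.
def cumulative_cost(stages):
    """stages: list of dicts with keys
    gamma                 -- index k in {0, 1, 2} of the selected gamma_k term
    alpha, alpha_p, beta  -- weights in [0, 1] (alpha_p stands for alpha'_i)
    cost_m, cost_m_p      -- maintenance costs for s_i and the misdiagnosed s'_i
    cost_f_n              -- failure cost of the final state s_n
    cost_p                -- penalty cost of a false alert
    """
    total = 0.0
    for s in stages:
        terms = (
            s["beta"] * s["cost_m"] + s["alpha"] * (1 - s["beta"]) * s["cost_f_n"],
            s["cost_p"] + s["cost_m_p"],
            s["cost_m_p"] + s["alpha_p"] * s["cost_f_n"],
        )
        total += terms[s["gamma"]]  # exactly one gamma_k equals 1 per stage
    return total


# Example with two hypothetical stages of a multi-stage attack.
print(cumulative_cost([
    {"gamma": 0, "alpha": 1 / 3, "alpha_p": 0.0, "beta": 0.2,
     "cost_m": 10.0, "cost_m_p": 0.0, "cost_f_n": 200.0, "cost_p": 0.0},
    {"gamma": 1, "alpha": 2 / 3, "alpha_p": 0.0, "beta": 0.0,
     "cost_m": 0.0, "cost_m_p": 5.0, "cost_f_n": 200.0, "cost_p": 3.0},
]))
```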

4.4. Building POMDP-based model

Obviously, a model is needed here to incorporate all the elements analyzed above, namely the security metrics, human factors, and cost function, and then to assist response with a rational tradeoff analysis among integrity, availability, latency, and potential attack impacts. As Fig. 1 shows, a rational defense depends on the estimated system states, which are observed and derived probabilistically from IDS reports. The system also evolves conditioned upon the alert reports (imperfect observations), the estimated state, and the defense action. With the three properties presented in Section 4.2, an anticipated action of the defender depends on the current observations and the previous action/state, which is obviously a Markov decision process. Moreover, since the implicit states can only be estimated from the explicit observations, the decision process can be naturally formulated as a partially observable Markov decision process (POMDP) initiated by the defender.

A POMDP model is structurally characterized as M = {S, U, Z, R} (Aberdeen, 2003), where S = {s_0, s_1, …, s_n} is a set of system states that cannot be observed directly by the defender; s_0 represents the initial state, and s_i (i ≠ 0) denotes a state associated with particular security properties. Z = {z_1, z_2, …, z_q} is a set of observations emitted by a particular system state and monitored by the defender. U = {u_1, u_2, …, u_m} is a set of actions in a response policy. Finally, there is a (possibly stochastic) reward r(i) ∈ R for each state s_i ∈ S or, equivalently, a cost c_{i,j}(u) for the state transition from s_i to s_j under a particular control u, which is implied in Eq. (1). The state transitions from s_i to s_j, directed by the defender, are generated as a Markov chain, i.e., s_i ∈ S [ν(s_i)] → z_i ∈ Z [μ(z_i)] → u_i ∈ U [p_{ij}(u_i)] → s_j ∈ S, where ν(·) is an observation distribution over states and μ(·) is a probability distribution over actions given a known z_i. At each stage, the defender sees only the observation z_i and the reward r(i), while she has no knowledge about the underlying system state space, how the actions affect the evolution of states, how the reward signals depend on the states, or even how the observations depend on the states. The objective of an optimal decision process, or a rational response, is to maximize the reward, namely max lim_{T→∞} E[(1/T)·Σ_{t=1}^{T} r_t], where T is discrete time denoting the decision stages of the defender. Equivalently, a set of rational responses is expected to minimize the overall cost Cost_s in Eq. (1), i.e., r_t can be replaced by the negative of Cost_s.

To construct a POMDP model for a particular scenario, we must define the system states S, observations Z, actions U, and rewards R, as well as the transition probabilities ν(·) and μ(·). In practice, the accuracy of the model construction relies on the expert knowledge of the security administrator and an extensive training data set. Based on the initialized model, the rational response can then be taken in real time (with a one-step delay inherent to such models) through two major steps:

• Step I: estimating system states. Let B_t be the system state estimate at stage t, an n-dimensional vector (n being the dimensionality of the state space) whose i-th element b_t(i) represents the probability that the system is in state s_i at stage t. The belief state b_t(i) can be calculated by Bayes' theorem as follows:

b_t(i) = [ ν(z_{t−1} | s_i) · Σ_k b_{t−1}(k) · Pr(s_i | s_k, u_{t−1}) ] / [ Σ_{z′∈Z} ν(z′ | s_i) · Σ_k b_{t−1}(k) · Pr(s_i | s_k, u_{t−1}) ]    (2)

• Step II: taking rational response. Based on the estimated system state, a cost-aware action at stage t can be taken, i.e., u_t = μ(B_t), with the objective of maximizing the reward in Eq. (1), i.e.,

μ*(B_t) = argmax_{u_t ∈ U} Σ_{s_i ∈ S} r_t(u_t) · b_t(s_i)    (3)

To capture the action effect, we use r_t(u_t) in place of r_t to denote the expected reward that can be achieved by taking optimal countermeasures at future decision stages given the current system state s_i and action u_t. A set of values r_t(u_t) is thus obtained from the response actions that have been taken, and the current optimal action is derived by feeding it into Eq. (2). In particular, r_t(u_t) is scenario-specific and can be determined a priori based on the security administrator's goals and experience. To achieve rational response in real time, the probabilistic state vector B_t is recursively calculated via Eq. (2) so that the optimal action can be taken via Eq. (3).
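The two steps can be prototyped with a standard POMDP belief update and a greedy action rule in the spirit of Eqs. (2) and (3); the sketch below assumes Python/NumPy, and all matrices are illustrative placeholders rather than values from the paper.

```python
import numpy as np


def update_belief(b_prev, action, obs, T, O):
    """Standard Bayesian belief update (Step I).
    T[u][k, i] = Pr(s_i | s_k, u);  O[i, z] = nu(z | s_i)."""
    predicted = b_prev @ T[action]      # sum_k b_{t-1}(k) Pr(s_i | s_k, u_{t-1})
    unnorm = O[:, obs] * predicted      # weight by the observation likelihood
    return unnorm / unnorm.sum()        # normalize to obtain b_t


def best_action(belief, R):
    """Greedy rule of Step II: R[u, i] = expected reward r_t(u) in state s_i."""
    return int(np.argmax(R @ belief))


# Tiny example: 2 states, 2 actions, 2 observations (all numbers hypothetical).
T = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.7, 0.3], [0.5, 0.5]])]
O = np.array([[0.8, 0.2], [0.3, 0.7]])
R = np.array([[1.0, -5.0], [-1.0, 2.0]])
b = np.array([0.5, 0.5])
b = update_belief(b, action=0, obs=1, T=T, O=O)
print(b, best_action(b, R))
```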

5. Design validation

The purpose of this section is to validate the feasibility of our proposed methodology and evaluate its performance for


measuring intrusion impacts based on the alert streams generated from a simulated network scenario.

5.1. Scenario description

We performed experiments using one of the MIT 2000 DARPA intrusion detection scenario-specific data sets, LLDOS 1.0 (MIT LLDDoS 1.0), with slight modifications. The simulation network is divided into three segments representing the outside network, the inside network, and the DMZ of the organization (an Air Force base). Network sniffers and IDS sensors were deployed to monitor the sessions of both the network and several particular hosts. The scenario contains a series of attacks launched by a novice attacker in five phases: (1) IP sweep, in which the attacker sends ICMP echo-requests to find live hosts; (2) Probe, in which the attacker probes the live IPs to look for the sadmind daemon running on Solaris hosts; (3) Break-in, in which the attacker compromises the system via the sadmind vulnerability; (4) Install virus, in which the mstream DDoS trojan software is installed on three hosts via telnet; and (5) the DDoS attack itself. Additionally, the defender in this scenario was assumed to be naive; that is, no actual response was taken until the system was compromised.

While the network contains three subnets, we only examined the hosts in the DMZ and inside networks. Table 1 shows the criticality Ds of different hosts, where ''loud'' is a router in the DMZ, ''solomon'' is the DMZ sniffer, and the others are common hosts. Also, ''mill'' and ''locke'' are the DNS server and sniffer in the inside network, and ''pascal'' is a vulnerable host. Note that the values assumed in our experiments are only used to quantify the reward signal, so they do not necessarily reflect the actual values of the assets in the original network. In addition, as the defender is naive, we assumed some countermeasures accordingly.

The experimental network was under a DDoS attack, which usually breaches system availability, confidentiality, and integrity, and such an attack usually takes five steps to achieve its final goal, i.e., launching the DDoS attack. However, since our methodology is more concerned with security properties, we roughly classify the system states into four coarse-grained categories, namely S = {W, P, C, B}, where W represents the normal state, P denotes the state under probing and subject to loss of availability, C denotes the state under exploitation with a breach of both integrity and availability, and B denotes the state in which the system has been compromised with no security attribute preserved. Fig. 4 illustrates the multi-stage attack, along with the implicit states and the attacker/defender actions; Da and Dd (the numbers following each action) are given for each class of actions.

In addition, we need to collect observations for estimating system states. In the experimental scenario, two NIDS sensors and two HIDS sensors were deployed, and a set of response actions is assumed in Fig. 4. Thus, the observations of the defender were mainly obtained from the four IDS sensors. Formally, an observation can be represented as Z = (Z_1, Z_2, Z_3, Z_4), where each sensor emits observations Z_i = {z_{t1}, z_{t2}, …, z_{tq}} (i = 1, 2, 3, 4), with z_{tj} ∈ {silence, alert} denoting the report of a sensor at time t, updated periodically in terms of a Report Update Cycle (RUC). For each alert observation, further details would be examined. For simplicity, the criticality Ds of the host determines the priority of the IDS sensors.

Furthermore, we must specify the probabilities ν(·) and μ(·) in the model in order to practically examine the transitions of system states. Unfortunately, the original data set does not contain any training data for state transitions with actual response; we therefore simply count the number of occurrences of all events represented by each conditional distribution using the available data and approximately obtain the state transition probabilities. With the given notations, we formally specify the five cases of IDS reports as follows:

• Detection accuracy: Pr(z_i = alert | ¬W).
• False positive rate: Pr(z_i = alert | W).
• False negative rate: Pr(z_i = silence | ¬W).
• Normal case: Pr(z_i = silence | W).
• Misdiagnosis: Pr(z_i = alert | ¬W), while ¬W is not correctly identified as P, C, or B.

Clearly, a good estimate of Pr(z_i = alert | W) may avoid wrong actions caused by false alarms in real-time response. In the original data set, the detection accuracy of the given IDS sensor reports was very high; we thus roughly assigned m_i ∈ [0.8, 1.0].

Table 1 – List of network assets.

            DMZ hosts                        Inside hosts
Hosts   Loud    Solomon   Others       Mill    Pascal   Locke   Others
Ds      20.0    10.0      1.0          50.0    5.0      10.0    2.0

5.2. Measuring attack impacts

We then applied the constructed model to measure the intrusion impacts of the simulation network as a whole.

1. We examined the alerts reported in IDMEP format (Intrusion Detection Message Exchange Protocol, as specified by the IETF) by the NIDS sensors in the DMZ and inside network (the two HIDS sensors on ''mill'' and ''pascal'' were excluded). Considering that no actual response was taken to cope with the alerts, we simply set Costm = 0, while the specification of Costf depends on Ds, which was determined by the destination addresses of the alerts.
2. We set α = (0, 1/3, 2/3, 1) in Eq. (1) to denote the attack progress associated with the attack states W, P, C, and B, respectively, and β_i = 0 to indicate that no countermeasure was actually taken.
3. Since no false positive appeared in the alert data (each attack session triggered an alert), the misdiagnosis case never occurred; that is, Costp = 0. Thus, Eq. (1) can be simplified to

Cost_s = Σ_{i=0}^{n} α_i · Cost_f(s_i)    (4)

where n = 3, representing the four coarse-grained system states W, P, C, and B, respectively. The constructed model was then used to measure the intrusion impact with the original alert stream as input. The resulting reward signal, expressed in terms of Cost_s, is depicted in Fig. 5: it underwent its most significant change at state C, where the value decreased from −3633.3 to −6780.0, and state B had a large negative reward of −12,170.0, which implies that the network had been completely compromised at that point. The reward then decreased sharply from this point as time advanced. One promising characteristic of the model is that it treats the network as a whole and measures security in terms of the intrusion impacts associated with network assets, abstracting and representing the inter-host dependencies as high-level states. Naturally, the model facilitates further analysis of the suspicious hosts that contribute to the sudden changes of the reward signal. Although the model is also capable of inferring more accurate response policies on the basis of more fine-grained system states, the deficiencies of the data set prevent us from doing so; detailed alert classification, defense mechanisms, and potential impacts would be required, and Ds and Dd would have to be redefined accordingly.

Fig. 5 – Intrusion impacts with the progress of attack (x-axis: state transitions in seconds; y-axis: defender's reward signal; α1 = 1/3, α2 = 2/3, α3 = 1, βi = 0).
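A minimal Python sketch of the simplified cost in Eq. (4) follows: each alert contributes α_i·Cost_f(s_i), with Cost_f determined by the criticality Ds of the alert's destination host from Table 1; the alert list itself is hypothetical.

```python
# Cumulative failure cost of Eq. (4) over an alert stream.
D_S = {"loud": 20.0, "solomon": 10.0, "mill": 50.0, "pascal": 5.0, "locke": 10.0}
ALPHA = {"W": 0.0, "P": 1.0 / 3, "C": 2.0 / 3, "B": 1.0}   # alpha = (0, 1/3, 2/3, 1)


def stream_cost(alerts):
    """alerts: list of (destination_host, inferred_state) pairs."""
    # Hosts not listed in Table 1 default to a low criticality of 1.0.
    return sum(ALPHA[state] * D_S.get(host, 1.0) for host, state in alerts)


print(stream_cost([("pascal", "P"), ("pascal", "C"), ("mill", "B")]))
```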


Fig. 4 – Multi-stage attack, implicit system states, and attacker/defender actions (along with the respective action costs Da and Dd in parentheses).

5.3. Rational incident response

Since the original data set does not contain an actual defense mechanism, we assumed some simple countermeasures, as shown in Fig. 4. For simplicity and without loss of generality, we further assume that the defender actions u_d can be generalized into three categories, passive, neutral, and positive, denoted u_P, u_N, and u_A, respectively. That is, at time t the defender action u_d must lie in one of the three action categories. A natural issue is that, in order to take an appropriate action, the defender needs to estimate the system state from the collected observations. Formally, an observation vector can be introduced to represent the detection results of the IDS sensors. For example, at network state W, an observation vector derived from a network-based IDS sensor HIDS@IP can be initialized as Z_W^{u_d=∅}(HIDS@IP) = (a_WW, a_WP, a_WC, a_WB) = (0.973, 0.023, 0.003, 0.001), where u_{d,t} = ∅ indicates that no action is taken, a_WW = Pr(silence | W) is the detection accuracy, and the sum of a_WP, a_WC, and a_WB is the false positive rate, i.e., Pr(alert | W). Thus, the observations collected from all the IDS sensors can be integrated as a whole to estimate the states (b_t(W), b_t(P), b_t(C), b_t(B)). Similarly, Z_P^{u_d}, Z_C^{u_d}, and Z_B^{u_d} can be estimated under different countermeasures u_d. Moreover, with a priori knowledge of DDoS attacks, the parameter β_i in the cost function Eq. (1) can be set as follows:

β_i = { β_0 = 0.8, if s_i = W and u_i ∈ {u_P, u_N};
        β_1 = 0.2, if s_i ∈ {P, C} and u_i ∈ {u_P, u_N};
        β_2 = 0.95, if s_i ≠ B and u_i ∈ u_A;
        β_3 = 0.05, if s_i = B and u_i ∉ u_A;
        β_4 = 0.5, otherwise }    (5)

where β_0 and β_1 are a pair of parameters used to balance the maintenance cost and failure cost due to inappropriate and appropriate actions in {u_P, u_N}, and β_2 and β_3 are another pair of parameters balancing the costs associated with inappropriate/appropriate actions in u_A.
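A small Python sketch of the state- and action-dependent weight β_i defined in Eq. (5) follows; the state and action-category labels follow Section 5.3.

```python
# beta_i as a function of the estimated state and the action category, Eq. (5).
def beta(state: str, action_category: str) -> float:
    """state in {"W", "P", "C", "B"}; action_category in {"u_P", "u_N", "u_A"}."""
    if state == "W" and action_category in ("u_P", "u_N"):
        return 0.8                      # beta_0
    if state in ("P", "C") and action_category in ("u_P", "u_N"):
        return 0.2                      # beta_1
    if state != "B" and action_category == "u_A":
        return 0.95                     # beta_2
    if state == "B" and action_category != "u_A":
        return 0.05                     # beta_3
    return 0.5                          # beta_4, otherwise


print(beta("C", "u_A"), beta("B", "u_P"))   # 0.95 0.05
```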



Also, we set (α_0, α_1, α_2, α_3) = (0, 1/3, 2/3, 1), and the probability distributions ν(·) and μ(·) are kept unchanged as previously. We also set a time window RUC = 30 s to periodically maintain and update the observation vector and thus cope with the overwhelming IDS sensor reports (alert correlation technology could be used to enhance detection accuracy, but is ignored here). The network administrator takes actions on the original data generated under the previous ''naive'' response policy; therefore, the actions have no actual effect on the observations. Owing to the highly accurate IDS sensor reports, the system states are estimated perfectly from the observations, so the results (shown in Fig. 6) are very similar to those generated by a ''heuristic'' policy. The comparison between Figs. 5 and 6 indicates that the expected security policies have the potential to improve system survivability in the following ways:

• No response was taken in the first attack phase, even though the IDS sensors had alerted on the intrusion, because the potential maintenance cost would have been much higher than the failure cost at that time. For example, at the 54th update period the failure cost was 906 and the overall cost Cost_s was 1630.8, and any additional response would have increased the overall cost.
• At the 84th update period, actions in u_P were taken to mitigate the attack based on the state estimate B_t. If the countermeasures had had actual effects, the attack would have been successfully thwarted.
• When the system evolved to state C, the significant update of the reward signal strongly suggested that the defender not only initiate the kill-connection action but also quickly diagnose the costly state by deleting the anomalous payload. In state B, the defender would definitely have taken recovery actions regardless of the maintenance cost, since the failure cost dominated the sharp increase of the overall cost Cost_s.

Fig. 6 – Reward evolution with anticipated response (the lower line); x-axis: update time, y-axis: reward signals (×10^4).

5.4. Discussion

The methodology we developed essentially serves as a middleware associating low-level IDS alerts with high-level intrusion response, as illustrated by the design rationale in Fig. 1. On the one hand, it can reduce the negative consequences caused by IDS imperfections (false negatives, false positives, noisy data, etc.) by combining the reports of a group of IDS sensors into one observation vector and treating them as a whole. Such a vector, along with the network assets, can also determine the significance of intrusion alerts, enabling alert prioritization. On the other hand, the methodology provides an interface for a security administrator to specify countermeasures with knowledge of the implicit system states with respect to particular security properties, allowing a fundamental tradeoff between system maintenance cost and failure cost to be achieved via the cost function (Eq. (1)); economic models and analyses such as the work by Anderson and Choobineh (2008) would be helpful for refining more effective cost factors and their relationships.

One may argue that the application of such a methodology can be dramatically impeded by the difficulty of specifying the numerous parameters required for POMDP model construction. Our concrete example and the experimental experience reported in the previous section provide a guideline and convincing evidence for tackling that challenge: what is most necessary for practical model construction is a rich data set and an accurate specification of cost factors such as network assets (as shown in Table 1), the deployment of IDS sensors, predictions of attack variants (especially their stages), business goals (which determine the desirable security properties), and so on. More importantly, the model implies a way to enhance the usability of IDS technology by involving human factors. Although the suggested countermeasures in Fig. 4 are expected to be automated, their definition and specification (the value of Dd) are actually dominated by the security administrator, who is supposed to be aware of the security demands of her organization and to understand which security policies can or cannot be enforced. Therefore, human behaviors are, in essence, integrated into the model, significantly facilitating the application of state-of-the-art risk assessment techniques.

6. Conclusion and future work

This article presented a holistic methodology for achieving high-level rational response on the basis of low-level IDS events. In particular, the adoption of countermeasures by a defender was formulated as a decision-making process, and the response was taken in accordance with the system states estimated from IDS alerts. The response of the defender was rational in that a cost function associated with particular security metrics could be predefined according to the network assets and business goals of an organization, and the cost factors were absorbed into a reward signal guiding the decision process. To construct the decision-making model, we specifically addressed the high-level quantification of security metrics, the modeling of human factors, and the specification of cost-sensitive response. A POMDP-based framework was then used to integrate those elements together.


Our future work lies in two directions. First, we intend to design practical methods to guide model construction and to provide input parameter values, making the design automated and more usable. Second, we will devote more effort to the quantification of security metrics and the definition of cost functions. It is also our hope to apply the methodology to more practical environments.

References

Aberdeen D. A survey of approximate methods for solving partially observable Markov decision processes. National ICT Australia report, Canberra, Australia; 2003.
Anderson EE, Choobineh J. Enterprise information security strategies. Computers & Security 2008;27:22–9.
Arnes A, Valeur F, Vigna G, Kemmerer RA. Using hidden Markov models to evaluate the risks of intrusions: system architecture and model validation. In: Proceedings of the ninth international symposium on recent advances in intrusion detection (RAID 2006); 2006. p. 145–64.
Butler SA. Security attribute evaluation method: a cost-benefit approach. In: Proceedings of ICSE'02; May 2002. p. 232–40.
Bloomfield RE, Guerra S, Miller A, Masera M, Weinstock CB. International working group on assurance cases (for security). IEEE Security & Privacy May/June 2006;4(3):66–8.
Dewri R, Poolsappasit N, Ray I, et al. Optimal security hardening using multi-objective optimization on attack tree models of networks. In: Proceedings of ACM CCS'07; 2007. p. 204–12.
Hariri S, Qu G, Dharmagadda T, Ramkishore M. Vulnerability analysis of faults and attacks in large-scale networks. IEEE Security & Privacy Magazine 2003; October/November issue.
Kreidl OP, Frazier TM. Feedback control applied to survivability: a host-based autonomic defense system. IEEE Transactions on Reliability March 2004;53(1):148–66.
Lee W, Fan W, Miller M, Stolfo SJ, Zadok E. Toward cost-sensitive modeling for intrusion detection and response. Journal of Computer Security 2002;10(1–2):5–22.
Mu CP, Li XJ, Huang HK, Tian SF. Online risk assessment of intrusion scenarios using D–S evidence theory. In: Proceedings of ESORICS; 2008. p. 35–48.
Mehta V, Bartzis C, Zhu H, Clarke E, Wing J. Ranking attack graphs. In: Proceedings of RAID 2006; 2006. p. 127–44.
MIT LLDDoS 1.0. Available from: .
Nicol DM, Sanders WH, Trivedi KS. Model-based evaluation: from dependability to security. IEEE Transactions on Dependable and Secure Computing Jan–Mar 2004;1(1).
Qin X, Lee W. Attack plan recognition and prediction using causal networks. In: Proceedings of ACSAC 2004; 2004. p. 370–9.
Sawilla RE, Ou X. Identifying critical attack assets in dependency attack graphs. In: Proceedings of ESORICS, LNCS; 2008. p. 18–34.
Schneier B. Attack trees. Dr. Dobb's Journal 1999.
Valeur F, Vigna G, Kruegel C, et al. A comprehensive approach to intrusion detection alert correlation. IEEE Transactions on Dependable and Secure Computing July–September 2004;1(3):146–69.
Zhang Z, Shen H. Constructing multi-layered boundary to defend against intrusive anomalies: an autonomic detection coordinator. In: Proceedings of IEEE/IFIP DSN'05; June 2005. p. 118–27.
Zhang Z, Lin X, Ho P-H. Measuring intrusion impacts for rational response: a state-based approach. In: Proceedings of the second IEEE international conference on communications and networking in China (CHINACOM'07); Aug 2007. p. 317–21.
Zhang Z, Nait-Abdesselam F, Lin X, Ho P-H. A model-based semi-quantitative approach for evaluating security of enterprise networks. In: Proceedings of the ACM symposium on applied computing (SAC); 2008. p. 1069–74.

Zonghua Zhang is currently working at NICT, Japan. Before joining NICT in April 2008, he was a postdoctoral researcher at INRIA Lille – Nord Europe, France. Prior to that, he spent over one year, from June 2006 to June 2007, as a postdoctoral fellow at the University of Waterloo, Canada. Zonghua obtained his Ph.D. in Information Science from the Japan Advanced Institute of Science and Technology (JAIST), his M.Sc. degree in Computer Science, and his B.Sc. degree in Information Science from Xidian University, China, in 2006, 2003, and 2000, respectively. His research interests cover a variety of security issues in computer systems and networks (both wired and wireless), with emphasis on anomaly-based intrusion detection, network forensics, analytical models, and evaluation/verification of system security. He is now working toward root-cause analysis of malware-driven anomalies in computer networks.

Pin-Han Ho received his B.Sc. and M.Sc. degrees from the Electrical Engineering Department of National Taiwan University in 1993 and 1995, respectively, and his Ph.D. degree from Queen's University at Kingston in 2002. He is now an Associate Professor in the Department of Electrical and Computer Engineering, University of Waterloo, Canada. He is the author/co-author of more than 150 refereed technical papers and several book chapters, and the co-author of a book on optical networking and survivability. His current research interests cover a wide range of topics in broadband wired and wireless communication networks, including survivable network design, wireless metropolitan area networks such as IEEE 802.16 networks, fiber-wireless (FiWi) network integration, and network security. He is the recipient of the Distinguished Research Excellence Award of the ECE Department of the University of Waterloo, the Early Researcher Award (Premier Research Excellence Award) in 2005, the Best Paper Awards at SPECTS'02, the ICC'05 Optical Networking Symposium, and the ICC'07 Security and Wireless Communications Symposium, and the Outstanding Paper Award at HPSR'02.

Liwen He is currently working as a Principal Security Researcher at the Security Research Center, BT Group CTO Office, UK. He received his Ph.D. degree from the University of Sheffield, UK.
