Network Intrusion Detection: Automated and Manual Methods Prone to Attack and Evasion

To gain a better understanding of network intrusion detection systems and their limitations, the authors examine intrusion detection techniques, introduce evasion techniques, and suggest methods for improving the trust relationship between the server response and the analyst.

DAVID J. CHABOYA, RICHARD A. RAINES, RUSTY O. BALDWIN, AND BARRY E. MULLINS
US Air Force Institute of Technology
Intrusion detection is the art and science of finding compromises or attempts to compromise a network or computer system's integrity. The term has been broadened to include the detection of other forms of attacks, such as scanning, enumeration, and denial-of-service (DoS). A network intrusion detection system (NIDS) monitors the traffic on an entire network to determine if an attack or intrusion has occurred. Although intrusion detection technology has improved significantly over the past decade, it's still relatively immature. One weakness of NIDSs is that although they identify attacks, they rarely identify whether the attacks succeed or fail. The success–failure determination is left to the analyst or system administrator. In cases in which the NIDS doesn't identify the attack's outcome, the analyst must instead review the network data, run vulnerability scans, or manually check the system for patches or signs of compromise. Although checking patches might be appropriate for small networks, it isn't always the most practical choice, particularly with a high volume of attacks (caused by network worms, for instance) or large, distributed networks. Researchers have historically focused extensively on the machine component of intrusion detection, largely ignoring the human component. Yet, a constant struggle exists between attackers seeking to evade detection and IDS developers and network security analysts trying to defend their networks against them. This lack of research into the human component puts defenders at a disadvantage because evasion and attack confirmation methods aren't fully appreciated. In this article, we describe common intrusion detection techniques, NIDS evasion methods, and how NIDSs detect intrusions. Additionally, we introduce new
PUBLISHED BY THE IEEE COMPUTER SOCIETY
evasion methods, present test results for confirming attack outcomes based on server responses, and propose a methodology for confirming response validity.
1540-7993/06/$20.00 © 2006 IEEE
IEEE SECURITY & PRIVACY

Intrusion detection techniques
It's important to note that intrusion prevention systems (IPSs) and host-based intrusion detection systems (HIDSs) have significantly more automated characteristics than NIDSs. Instead of relying on analysts to make decisions, IPSs and HIDSs try to block attacks outright. However, NIDSs are still prevalent due to their lower cost and greater ability, relative to HIDSs, to strategically identify attacks. Likewise, network IPSs must often pass suspicious traffic through because of the danger of causing DoS attacks on themselves. There are two well-recognized areas of detection:

• Misuse or signature-based detection focuses on known attacks or the known characteristics of attacks by pattern matching on a predefined byte sequence.
• Anomaly-based detection seeks to establish a normal baseline and then search for traffic that differs from it. Many forms of anomaly detection exist, including statistical, data mining, machine-learning, immunity-based, and information-theoretic approaches.1

Current NIDSs combine elements of both misuse and anomaly-based detection to form a complete system.2 NIDSs scan traffic going to and from the protected network for malicious activity. When the system detects a security violation, it triggers an alert that contains information such as type of attack, destination port, and IP address. It's then up to the security analyst to determine the alert's relevance and the attack's outcome. For example, a NIDS might pattern match on a signature such as "9090909090" that would detect a common no-operation (NOP) sled. Unfortunately, without additional connection data or information about the target computer, the analyst would have difficulty determining the attack's outcome or whether the alert was a false positive. These examples highlight a common mode of NIDS operation called real-time analysis. The benefit of real-time alerts is that the NIDS detects and responds to intruder actions immediately, potentially mitigating damages. For more detailed offline processing, some NIDSs are capable of saving connection data (often independent of signatures). The security analyst examines this data to determine if malicious activity occurred. For example, the analyst could detect a network worm by recognizing an increase in traffic on a well-known port. Offline processing is beneficial for several reasons: new attacks can be detected, evasion isn't as effective because signatures aren't always required, and processing can be complex, often providing more useful information to the analyst.
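As a toy illustration of misuse detection, a byte-signature matcher over raw payloads might look like the following sketch. The rule set here is a made-up example (including the NOP-sled pattern mentioned above), not real Snort signature content:

```python
# Minimal sketch of signature-based (misuse) detection over raw payload bytes.
SIGNATURES = {
    "x86 NOP sled": b"\x90\x90\x90\x90\x90",   # the "9090909090" pattern above
    "directory traversal": b"../",
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found anywhere in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

hits = match_signatures(b"\x41\x42" + b"\x90" * 8 + b"\xcc")
```

Note that such a matcher says nothing about whether the attack succeeded; it only reports that the pattern appeared on the wire, which is exactly the gap the rest of this article examines.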
Detection with context Contextual signatures extend NIDS alerting with techniques such as understanding the network and matching on server-response traffic.3 Many NIDSs, such as Snort (www.snort.org/docs) and Bro,4 provide additional connection data that lets analysts look for signs of backdoor connections or reply traffic. Analysts use the server responses in many cases to determine if the attack was successful. For example, a malformed request for a Web resource might return a "HTTP/1.1 400 Bad Request," indicating that the attack failed. Techniques such as passive–active fingerprinting enable NIDSs to determine which operating system the attacker has targeted. Active verification is a method for reducing irrelevant alerts by actively probing an attacked system to determine if it's vulnerable.5 For instance, analysts use Nessus (www.nessus.org/documents) scripts in conjunction with Snort to determine if an attacked computer is vulnerable, not vulnerable, or if the alert is unverifiable. Another method for determining possible attack outcomes uses the target's vulnerability profile to determine if an exploit is even of interest.3 For example, an Apache exploit directed against an Internet Information Server (IIS) will always fail and can be disregarded. Request–reply pattern matching isn't a new concept, but little documentation exists on it in academic literature. To our knowledge, no open source literature addresses the server response and its validity with respect to buffer-overflow attacks. Bro, an open source NIDS, includes functionality for matching both sides of network traffic streams (inbound and outbound), especially HTTP traffic.
Snort is also capable of this type of matching using the flowbits-detection plug-in, which allows stateful analysis and dependencies between rules. We discuss further applications of server-response matching later.
How NIDSs detect intrusions
Most NIDSs require a synergy between machine and human. The computer needs the human to analyze the result of its collection, and the human needs the computer to collect the data and provide processing and analysis. The human component's role has received little attention and is far more important than many realize.6 The problem with most NIDSs is that they leave too much work for the analyst, particularly in determining the attack's outcome. To do so, the analyst uses two types of verification: immediate or delayed. Immediate verification methods include:

• An obvious attack occurs.
• The analyst reviews the server response.
• The analyst checks the network configuration, which requires system knowledge.
• Automatic or manual active alert verification.

Delayed verification methods include:

• The analyst manually checks logs, patches, and so on.
• The analyst checks for backdoor traffic.
• The analyst uses anomaly detection such as traffic analysis and data mining.

The immediate techniques are more commonly used. Based on previous public attacks and exploit code, analysts assume that if an attack is successful, the attacker will immediately take action against the target system. In most cases, analysts use the server-response method to determine an attack's outcome because it's often assumed to be trusted and outside of the attacker's control. However, most NIDSs leave this review up to the analysts—a weakness that leaves them vulnerable to evasion attacks. Instead of placing some burden on IDS developers, analysts must have in-depth knowledge of server-response behavior and be capable of distinguishing legitimate responses from forged ones. The third immediate verification method requires that the NIDS or analyst have prior
Table 1. Experimental server response results.

EXPLOIT         PATCH        UNPATCHED SERVER RESPONSE   PATCHED SERVER RESPONSE                       SIZE (BYTES)
Apache Chunked  n/a          None                        HTTP/1.1 400 Bad request                      542
IIS_WebDAV      03-07        None                        HTTP/1.1 400 Bad request                      235
IIS_Nsiislog    03-19/03-22  None or 500 Server Error    HTTP/1.1 400 Bad request                      111
IIS_Printer     01-23        None                        None                                          n/a
IIS_Fp30Reg     03-51        None                        HTTP/1.1 500 Server error                     258/261
LSASS           04-11        None                        WinXP: DCERPC fault; Win2K: LSA-DS response   WinXP: 92; Win2K: 108
RPC_DCOM                                                 Remote activation response                    92
knowledge of the target system's configuration. Although effective, this method has several downsides, including the extra expense of maintaining and updating a database of network configurations, and configuration data that might be old or insufficiently detailed to determine attack outcomes. Active alert verification not only reduces irrelevant alerts but also helps confirm outcomes. It's important that this process occur immediately; otherwise, the attacker might patch the server and make it appear not vulnerable. If the time delay between the scan and the compromise is too long, the attacker can install a full legitimate patch and reboot the system or hot patch the server to return a forged response when scanned. Many other techniques can help determine when an intrusion occurred—even several hours, days, or weeks after an attack. Analysts or system administrators often check logs or patches if they suspect an attack might have succeeded. However, this method is very time-consuming and vulnerable to system tampering. Another way to determine whether a system has been compromised is through backdoor signatures, which check for traffic on unusual ports or match on commands an attacker is likely to use. Unfortunately, encrypted backdoors and covert channels reduce this technique's effectiveness. Finally, anomaly-based detection methods such as data mining, correlation, and traffic analysis are particularly effective in recognizing unauthorized network traffic or suspicious user activities resulting from intrusions.
Enhancing network forensics
Our research uses the Metasploit Framework (www.metasploit.com) and a test network of Windows computers to determine server responses to buffer overflows. In six of the seven TCP buffer-overflow exploits tested, we found that patched servers returned unique application-layer responses. User Datagram Protocol (UDP) attacks are, by definition, connectionless and therefore aren't good candidates for response analysis. The results in Table 1 are consistent across Windows 2000 and XP service packs. A response of "none" means that we received either no packets or only a transport-layer response (such as an ACK packet). The error message's size is a useful factor for determining the legitimacy of a response, which we discuss later. Although this sample of exploits is small, it includes many well-known remote Windows buffer-overflow vulnerabilities (www.microsoft.com/security/bulletins/). The results demonstrate that the server response can give clear evidence of the target system's configuration. In addition, the consistency of the responses makes them ideal for both anomaly-based and misuse detection applications. For example, developers could create specific signatures that match exploit attempts with expected responses (using Snort's flowbits plug-in, for example). In the absence of an anticipated response, the IDS could generate an alert for a successful attack. The more general case of a missing application-layer response could indicate to an anomaly-based detection system that the attack might be successful. The performance impact and the possibility of dropping the response packet are considerations when implementing these techniques. A postprocessing or offline analysis solution would probably be ideal.
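Using Snort's flowbits plug-in, this kind of request–response pairing might be sketched as follows. The SIDs, port, content strings, and flowbit name are illustrative placeholders, not production signatures:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"EXPLOIT buffer-overflow attempt"; \
    flow:to_server,established; content:"|90 90 90 90 90|"; \
    flowbits:set,exploit.attempt; flowbits:noalert; sid:1000001; rev:1;)

alert tcp $HOME_NET 80 -> $EXTERNAL_NET any (msg:"Expected patched response missing - possible compromise"; \
    flow:to_client,established; flowbits:isset,exploit.attempt; \
    content:!"400 Bad Request"; sid:1000002; rev:1;)
```

One caveat: a wholly absent response produces no packet for the second rule to inspect, so the missing-response case still needs a timeout-based check, which is one reason a postprocessing solution is probably better suited here.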
NIDS evasion techniques
Attackers break into computer systems for a variety of reasons; sometimes it's just for fame and notoriety (such as defacing a Web site) with no concern for stealth. Yet, more advanced attackers might want to steal confidential information or use the compromised system to gain further network access and thus want to remain undetected for as long as possible. If the attacker evades detection during the initial compromise, the chances of remaining undetected improve by using tools such as rootkits that hide the attacker's presence. Originally, attackers had to worry primarily about hiding from system administrators. However, with the advent of IDSs, attackers invented new methods to keep attacks under the radar. To aid in understanding the threat, we discuss basic evasion principles, relevant techniques (in particular, polymorphic shellcode), and evasion attacks against the analyst.
A few basic evasion principles apply to NIDSs. First, the attacker can cause the IDS to process packets differently than the target system.4,7 This attack works because the IDS often resides on a different part of the network or on a different operating system. Second, the intruder can overload the IDS or analyst so that the attack packet is either dropped or missed in a flurry of alerts. For example, alert-flooding tools, such as IDS stimulators, are designed to generate considerable network traffic specifically tailored to fire alerts in NIDSs.8 Third, the intruder can modify or encode the attack to exploit processing differences between the IDS and target application. To bypass a NIDS signature looking for directory traversal (for example, “/”), the attacker replaces the “/” with its hexadecimal equivalent, “0x2f”. Next, the attacker can develop a new attack that the IDS isn’t yet programmed to detect, called a 0-day attack. For instance, the MS03-007 ntdll.dll vulnerability (http://support.microsoft.com/kb/815021) was first discovered after a backdoor signature alert. Finally, the intruder can attempt to hide the attack as normal activity, a less serious attack, or an entirely new attack.9 For example, mimicry attacks target HIDSs by modifying the exploit characteristics to mimic those of a legitimate application.10,11 Additionally, in Snort versions before 2.0.0, Darren Mutz and his colleagues found that it was possible to trigger mismatched alerts, which could be mistaken for lesser alerts such as port scans, by making an exploit appear to be one that occurred higher in the rule set.8
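The encoding trick in the third principle can be demonstrated against a naive matcher; the request line below is fabricated for illustration:

```python
from urllib.parse import unquote

signature = "../"                                             # what the naive NIDS matches on
request = "GET /scripts/..%2f..%2fwinnt/system32/ HTTP/1.0"   # attacker encodes "/" as %2f

nids_sees_attack = signature in request    # byte-level match on the wire: misses it
target_decodes = unquote(request)          # the Web server URL-decodes before acting
```

The NIDS inspects the raw bytes and finds nothing, while the target decodes `%2f` back to `/` and executes the traversal, which is exactly the processing difference the attacker exploits.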
Polymorphic shellcode
Attackers originally customized many buffer-overflow exploits for the Unix operating system to provide a command shell on the target computer. The art of constructing more compact, complex, and functional shellcode has advanced significantly since then.12 Shellcode can be small, simple, and fairly benign (for example, displaying a message box to the victim), or it can install covert backdoors, modify key operating system files, add unauthorized users, or obtain command-line and interactive access to the victim's computer. Figure 1 shows a typical payload with the following four main sections executed in sequence: return addresses or jump to extended stack pointer (ESP), NOP sled, decoder, and shellcode. First, the attacker overwrites the saved instruction pointer with the address of a jump ESP instruction. This is just one of many Windows overflow techniques that let an attacker cause control flow to pass to the shellcode. The next section, the NOP sled, consists of a series of no-operation commands that do nothing but pass execution to the next command, removing the need to know exactly where the payload is loaded in memory because a jump into any portion of the sled will eventually lead to the payload. Exploit writers often encode their payloads either to remove bad characters that would cause the exploit to fail (a buffer-terminating null, or "00", for example) or to evade signature-based detection.

Figure 1. Buffer-overflow sections. Control flow passes from the overwritten return address to the no-operation sled, decoder, and shellcode.

The pseudocode below describes a simple example using an exclusive-OR operation (XOR) to encode:

for i < size of shellcode
    xor byte[i] with 0x95
    increment counter i
loop to top
The attacker places the decoder in front of the encoded shellcode; during execution, the decoder reverses the above process, letting the target computer run the attacker's code. Finally, the shellcode executes, now in a decoded state. The attacker must convert the payload to machine code for it to run on the target computer. For example, 90 is the hexadecimal value for the default NOP in the Intel x86 architecture. The IDS attempts to pattern match on certain parts of these known payloads. Attackers use polymorphism to evade such detection by borrowing techniques from computer virus developers to create unique yet functionally equivalent shellcode.13 Advances in polymorphic shellcode generation have made it difficult even for anomaly-based detection methods, such as data mining, to locate buffer-overflow attacks.14 Indeed, polymorphism has led many developers to focus on detecting exploit vectors—the methods by which the vulnerability is triggered, for example—rather than the shellcode.
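The XOR scheme in the pseudocode can be sketched in a few lines; the 0x95 key comes from the example above, and the payload bytes are placeholders rather than working shellcode:

```python
def xor_transform(data: bytes, key: int = 0x95) -> bytes:
    """XOR every byte with the key; XOR is its own inverse, so this both encodes and decodes."""
    return bytes(b ^ key for b in data)

payload = b"\x90\x90\x90\x90\x31\xc0"   # placeholder bytes, not functional shellcode
encoded = xor_transform(payload)

# the literal NOP-sled signature no longer appears in the encoded form,
# yet the decoder stub on the target restores the original before execution
```

Because the encoded bytes carry no literal signature, the IDS can only match on the decoder stub itself, which is precisely what polymorphic engines then mutate.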
IDS analyst evasion
We previously highlighted that most NIDSs leave determining the attack outcome to the analyst. Yet, a larger—and substantially overlooked—problem is the possibility of evasion attacks against the analyst. Although attackers want to evade detection, their ultimate goal is to conceal their intrusions. To trick the analyst, the attacker must make the attack appear to have failed. This requires an understanding of the characteristics of failed intrusions. Often an attacker will either fail to obtain an interactive session or will be unable to connect to a backdoor. For example, the analyst might not see any network traffic after the intrusion attempt, or might see the attacker send multiple SYN packets to a known backdoor port and the targeted server reply with a RST ACK, indicating that the port is closed. These first two characteristics provide both a limitation and a weapon. A limitation is that the attacker can't use any type of backdoor that leaves a network trace immediately after the attack. However, an attacker might use analysts' tendency to look for signs of failed intrusions against them by sending SYN packets to a fake backdoor. Another characteristic that indicates that an attack has failed is the presence of a patched-server response. Although it's true that the server response can be trusted in association with simple attacks such as password guessing or directory traversals, buffer overflows are exceptions. Because code is executed on the remote system (often as root or administrator), in most cases nothing prevents the attacker from forcing the patched server to return the response that the IDS or security analyst expects. To forge responses, the attacker needs a server-socket handle for the connection in question. One option is to create a handle using raw sockets and craft a forged packet. To do this, the attacker must manually construct the IP and TCP headers and insert the false response data. Whereas the IP header is easy to construct, creating a reliable TCP header isn't as straightforward. The attacker must capture the initial sequence numbers and then calculate the checksum and acknowledgment sequence number.
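The SYN-then-RST pattern the analyst reads as a failed backdoor probe can be expressed as a small heuristic over captured TCP flags. The tuple format below is an assumption made for illustration, not a real capture-library API:

```python
def backdoor_probe_looks_failed(packets):
    """packets: iterable of (src, dst, dst_port, tcp_flags) tuples from a capture.

    Mirrors the analyst's rule of thumb: SYNs toward a suspected backdoor port,
    answered only by RSTs and never by a SYN,ACK, read as a closed port.
    """
    saw_syn = any(flags == "SYN" for *_, flags in packets)
    saw_synack = any(flags == "SYN,ACK" for *_, flags in packets)
    saw_rst = any("RST" in flags for *_, flags in packets)
    return saw_syn and saw_rst and not saw_synack

# probe of a closed port: SYN answered by RST,ACK, no handshake completes
closed_probe = [
    ("10.1.1.55", "10.1.1.10", 31337, "SYN"),
    ("10.1.1.10", "10.1.1.55", 1158, "RST,ACK"),
]
```

The weapon the text describes is that an attacker can deliberately generate exactly this pattern against a fake backdoor port, so a heuristic like this one can be fed false "failure" evidence.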
Given that the attacker is part of the session and has prior knowledge of the attack's size, this is possible, but this method is still less than ideal due to shellcode size (greater than 350 bytes) and the challenge of creating an authentic packet. The other option is for the intruder to locate and reuse the existing socket handle or equivalent. Three methods to achieve this are possible:

• locating the peer or source port (findsock),
• sending and recognizing a hardcoded tag (findtag), or
• finding the connection identifier.

In the findsock option, the attacker uses the getpeername call to "determine the endpoint associated with a given socket."12 If the source port matches that of the attacker, forging a response packet means placing the required data and the located socket handle on the stack and then calling send. When optimized and hardcoded to a specific service pack, this process requires only 40 bytes. To find the address of the socket call, for example, service-pack-independent code might locate kernel32.dll and then use loadlibrary on ws2_32.dll. Instead, we could simply load the address (for example, 7503c312 for Windows 2000 SP3) into a register and then call that register. In the findtag option, the attacker enumerates socket descriptors to determine the amount of data pending in the network's input buffer that can be read from the socket. If data is pending, the attacker calls recv to compare the hardcoded tag with the one sent. Although findtag requires an additional packet, it also works through network address translation (NAT) devices, unlike findsock. A limitation of both the findsock and findtag methods is that in some cases—Internet Information Server (IIS), for example—the exploited process doesn't own the socket. For these attacks to be successful, the forging code must be injected into the correct process and then executed in the context of that process.15 This form of process injection requires numerous API calls (conservatively, at least 255 bytes) to locate the correct process and then inject and execute the required code. Instead of process injection, in certain Internet Server Application Program Interface (ISAPI) overflows, it's possible to locate the connection identifier and use ISAPI functions to forge a message. However, this method also requires considerable size. Finally, in many cases, attackers can't reuse default APIs to generate the response packets; instead, they have to forge the entire error message and must include the expected error string in the overflow code (of the default size shown in Table 1). The shellcode that forges the response must also install a backdoor, such as adding a new account or binding a command shell to a TCP port. All other forms of backdoors—connect back or stage loading—are obvious to network analysts, assuming the NIDS collects the necessary traffic. Although an attacker could attempt to install a very tiny, time-delayed backdoor, the possibility that the socket handle might be unavailable makes this attack impractical. Next, the attacker can simply delay connecting to the backdoor long enough for the analyst to dismiss the attack as failed. Consider the Ethereal (www.ethereal.com) network capture in Figure 2.
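The findsock idea—matching a connection by its peer's source port via getpeername—can be mimicked at the sockets API level. This is a benign Python analogue of what the 40-byte shellcode does, not the shellcode itself:

```python
import socket

def find_socket_by_peer_port(candidates, attacker_port):
    """Return the first socket whose peer (source) port matches, as findsock does."""
    for s in candidates:
        try:
            _host, port = s.getpeername()
        except OSError:
            continue                      # not a connected socket
        if port == attacker_port:
            return s
    return None

# demo: one local TCP connection, then locate its server-side descriptor
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.socket()
client.connect(listener.getsockname())
server_side, _addr = listener.accept()

attacker_port = client.getsockname()[1]   # the port the "attacker" connected from
found = find_socket_by_peer_port([listener, server_side], attacker_port)
```

Real findsock shellcode walks raw socket descriptors inside the exploited process rather than a Python list, but the selection criterion—peer port equals the attacker's known source port—is the same.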
The first three packets are simply the three-way handshake, which sets up the connection between the attacker and victim. When expanded, an analyst would see a typical buffer-overflow attack in packet 4. The server’s response is in packet 5. At first glance, the server appears to have denied the request, but the response is actually forged and the attacker now has control over the system. This example highlights a problem with the manual evaluation of alerts. Because the attacker can make the response 100 percent authentic, there’s no way to tell the outcome based solely on the response. Even custom error messages can be duplicated, although shellcode size constraints and custom error pages’ typically large size might make it difficult to do so.
Improving response trust
It would be beneficial to use the information that the server response provides and still have a guarantee of authenticity. Additionally, it would be useful to automate this process so that verification is faster and less error prone. By itself, however, the response isn't trustworthy. If a NIDS were programmed to simply check for the right response, attackers could easily evade detection outright. It turns out that the key to validating the response lies in the attacker's shellcode. We can use three techniques to analyze the shellcode to determine whether a response is forged.
Reverse-engineering the shellcode A possible solution stems from current methods that analysts use to determine the shellcode’s function. If the shellcode has functionality only to display a message box, then the response can’t possibly be forged. To determine this, however, the analyst would first need to determine the encoding technique and then decode the shellcode and reverse-engineer it to determine the functionality. Although this is certainly possible, it’s unlikely that the average IDS analyst has the skills or time to accomplish this task.
Cataloging known shellcode A more practical method is to catalog known exploit shellcode. Because exploits that are posted to both security and hacker sites are static—they’re usually compiled and executed with no changes—the attacker’s payload can be matched to one that occurred in a prior exploit. Analysts can predetermine the public shellcode’s function prior to the attack, so it’s easy to know if forging is possible. Therefore, a patched server response combined with known exploit shellcode is sufficient evidence that the target is indeed patched. If Snort detected an attack and a patched response, it could send the session in question to a shellcode database postprocessor, which would locate the shellcode, attempt a database match, and then fire off an alert if a match didn’t exist. Another option would be to send the shellcode to another processing engine for further inspection. This technique’s drawback is that it requires constructing and maintaining a database of known exploit shellcode. In addition, more advanced attackers might use custom-developed shellcode that isn’t publicly available.
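Cataloging could be as simple as hashing the extracted payload and looking it up; the catalog entries below are fabricated placeholders:

```python
import hashlib

# hypothetical catalog: sha256(payload) -> known public exploit payload
KNOWN_PAYLOADS = {
    hashlib.sha256(b"\x90" * 16 + b"\xcc").hexdigest(): "example public PoC payload",
}

def classify_payload(payload: bytes) -> str:
    """Known payloads can't forge responses beyond what their cataloged code does."""
    digest = hashlib.sha256(payload).hexdigest()
    return KNOWN_PAYLOADS.get(digest, "unknown - escalate for further inspection")
```

A raw hash is a simplification: to tolerate the minor edits the text mentions (changed ports, backdoor names, passwords), a real catalog would need to mask those variable fields before hashing or match on the invariant portions of the payload instead.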
Analyzing payload size
It would be beneficial to determine if the response is trusted without having to catalog known exploit shellcode. We suggest that the attack's payload must be of sufficient size for forging to be possible. Understanding the different methods for forging responses makes it possible to predict the minimum forging size. If the shellcode is smaller than that minimum, the analyst can trust the server response. This technique carries the most risk because it requires the NIDS signature developer or analyst to clearly understand the optimum forging methods. The worst-case scenario: the attacker uses service-pack-dependent shellcode. The factors affecting payload size include the code size needed to find the socket handle, the size of the attacker's backdoor, and the error message's size. Although the smallest publicly documented16 Windows backdoor shellcode is roughly 91 bytes, it's risky to include backdoor size in the calculations because it's mostly an unquantifiable factor. The drawback to the payload-size approach is the complexity in determining forging requirements for various vulnerabilities. For instance, as we mentioned earlier, the error message's size is considered only when default APIs can't be used. Although more research is certainly needed, our initial tests indicate that in most cases a payload of at least 350 bytes is required to forge responses.

No.  Source     Destination  Protocol  Info
1    10.1.1.55  10.1.1.10    TCP       1158 > http [SYN] Seq=0 Ack
2    10.1.1.10  10.1.1.55    TCP       http > 1158 [SYN, ACK] Seq=
3    10.1.1.55  10.1.1.10    TCP       1158 > http [ACK] Seq=1 Ack
4    10.1.1.55  10.1.1.10    HTTP      Continuation
5    10.1.1.10  10.1.1.55    HTTP      HTTP/1.1 400 Bad Request
6    10.1.1.55  10.1.1.10    TCP       1158 > http [ACK] Seq=1304
7    10.1.1.10  10.1.1.55    TCP       http > 1158 [RST] Seq=91 Ack

Figure 2. Buffer overflow attack. A network capture of what appears to be a failed intrusion attempt. The server's response is packet 5.
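The payload-size heuristic described above reduces to a threshold check. The 350-byte figure is the authors' empirical estimate, and the 40-byte figure is the optimized, service-pack-dependent findsock case from the earlier discussion:

```python
# empirical minimum payload sizes for response forging (bytes), per the text
MIN_FORGE_DEFAULT = 350      # typical case observed in the authors' tests
MIN_FORGE_OPTIMIZED = 40     # worst case: hardcoded, service-pack-dependent findsock

def response_may_be_forged(payload_size: int, assume_optimized: bool = False) -> bool:
    """If the payload is too small to hold forging code, the server response can be trusted."""
    threshold = MIN_FORGE_OPTIMIZED if assume_optimized else MIN_FORGE_DEFAULT
    return payload_size >= threshold
```

The `assume_optimized` flag captures the worst-case scenario: against an attacker willing to hardcode per service pack, almost no payload is provably too small, which is why this technique carries the most risk.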
Results and analysis
Each method for determining whether to trust the response has its own strengths and weaknesses. The cataloging of known exploit shellcode is perfectly suited for public exploits, which comprise the majority of attacks. As we previously described, the payload size is irrelevant because the code is mostly static, so we can account for any minor changes in the payload—changing ports or backdoor names and passwords. In addition, this method is the most straightforward and requires the least technical research. However, it fails with randomly encoded payloads. Payload-size analysis is designed for optimized payloads that are randomly encoded. With the overall trend of payload development toward small, optimized code, we expect this method to increase in effectiveness. For example, the latest release of the Metasploit Framework has an average encoded Windows payload size of roughly 250 bytes. Our analysis suggests that only one of the 19 payloads would be large enough to forge responses for the exploits we tested. Contrast this to public shellcodes available at securityfocus.com and securiteam.com, in which the average Windows payload size is more than 400 bytes. In the unlikely case that the payload is both unknown and too large, the analyst should either reverse-engineer it or check the patches and logs. If analysts fail to implement these methods correctly, their systems will be vulnerable to attackers inserting nonfunctional known shellcode in addition to the forging payloads or splitting larger payloads within or between packets. We note that all methods are likely to be computationally expensive and best reserved for postprocessing. It's also obvious that an attacker could purposefully create large encoded payloads to render the above methods ineffective. However, this provides no real advantage because it makes the analyst try harder to determine the attack's outcome. Ultimately, we feel that the widespread knowledge of the possibility of response forging, combined with methods to correctly determine response validity, makes forging attacks too risky. Nevertheless, it's useful to determine the result with polymorphic overflows in which the payload size is either too large or can't be easily calculated. Because attackers often reuse good decoders throughout many exploits, it might be possible to use the decoding/reverse-engineering method to locate the decoder and then either use it to support analyzing the payload size or to determine the shellcode function.
The methods we discuss here must be implemented either as analyst guidance or, preferably, in a NIDS plug-in or similar software solution. Although we only address Windows computers, we expect Unix systems to show similar results. We're currently testing similar attacks against Linux using the Metasploit Framework. Additionally, we're developing payload-size and shellcode-matching filters for Snort. On the response-matching side, several real-world issues exist that need additional research—we've found some exploits that have several different patched responses based on the exploit vector, requiring a better matching method than simply using the flowbits plug-in, for example. Further research into these ideas should prove even more beneficial in reducing both the analyst workload and the risk from evasion attacks.
Acknowledgments
We thank Matt Miller for his ideas and examples on shellcode creation and troubleshooting. Additionally, H.D. Moore's suggestions on using existing sockets and process injection for forging were particularly beneficial. We also thank the reviewers for their insightful comments and recommendations that strengthened the article. Finally, we thank the US Air Force Research Laboratory's Anti-Tamper Software Protection Initiative Technology Office for supporting this research. The views expressed in this paper are those of the authors and don't reflect the official policy or position of the US Air Force, US Department of Defense, or the US government.
IEEE SECURITY & PRIVACY
References
1. P. Ning and S. Jajodia, "Intrusion Detection Techniques," The Internet Encyclopedia, H. Bidgoli, ed., Wiley & Sons, 2003, pp. 2–6.
2. J. Allen et al., State of the Practice of Intrusion Detection Technologies, tech. report CMU/SEI-99-TR-028, Carnegie Mellon Univ. Software Eng. Inst., 2000, pp. 37–60; www.sei.cmu.edu/pub/documents/99.reports/pdf/99tr028.pdf.
3. R. Sommer and V. Paxson, "Enhancing Byte-Level Network Intrusion Detection Signatures with Context," Proc. 10th ACM Conf. Computer and Communications Security (CCS 03), ACM Press, 2003, pp. 5–6.
4. V. Paxson, "Bro: A System for Detecting Network Intruders in Real-Time," Proc. 7th Ann. Usenix Security Symp. (Security 98), Usenix Assoc., 1998, pp. 12–15.
5. C. Kruegel and W. Robertson, "Alert Verification: Determining the Success of Intrusion Attempts," Proc. 1st Workshop on the Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA 04), 2004, pp. 1–14.
6. J. Goodall, W. Lutters, and A. Komlodi, "The Work of Intrusion Detection: Rethinking the Role of Security Analysts," Proc. 10th Americas Conf. Information Systems (AMCIS 04), Nicolas C. Romano, ed., Assoc. for Information Systems, 2004, pp. 1421–1427.
7. T. Ptacek and T. Newsham, Insertion, Evasion, and Denial-of-Service: Eluding Network Intrusion Detection, Secure Networks, Jan. 1998, pp. 11–14.
8. D. Mutz, G. Vigna, and R. Kemmerer, "An Experience Developing an IDS Stimulator for the Black-Box Testing of Network Intrusion Detection Systems," Proc. 19th Ann. Computer Security Applications Conf. (ACSAC 03), IEEE CS Press, 2003, pp. 2–7.
9. K. Tan, J. McHugh, and K. Killourhy, "Hiding Intrusions: From the Abnormal to the Normal and Beyond," Proc. 5th Int'l Workshop on Information Hiding, LNCS 2578, Springer-Verlag, 2002, pp. 10–16.
10. D. Wagner and D. Dean, "Intrusion Detection via Static Analysis," Proc. 2001 IEEE Symp. Security & Privacy (SP 01), IEEE CS Press, 2001, pp. 9–11.
11. D. Wagner and P. Soto, "Mimicry Attacks on Host-Based Intrusion Detection Systems," Proc. 9th ACM Conf. Computer and Communications Security (CCS 02), ACM Press, 2002, pp. 1–4.
12. M. Miller, "Understanding Windows Shellcode," nologin.org, Dec. 2003; www.hick.org/code/skape/papers/win32-shellcode.pdf.
13. ADMmutate Documentation, 2001; www.ktwo.ca/readme.html.
14. T. Detristan et al., "Polymorphic Shellcode Engine Using Spectrum Analysis," Phrack, vol. 11, no. 61, Aug. 2003; www.trust-us.ch/phrack/show.php.
15. R. Kuster, "Three Ways to Inject Your Code into Another Process," July 2003; www.codeproject.com/threads/winspy.asp.
16. S.K. Chong, "History and Advances in Windows Shellcode," Phrack, vol. 11, no. 62, July 2004; www.trust-us.ch/phrack/show.php.

David J. Chaboya is a captain in the US Air Force and is the assessment science team lead at the Air Force Research Lab's Anti-Tamper and Software Protection Initiative Office. His research interests include intrusion detection, network traffic analysis, exploit development, and reverse engineering. Chaboya has an MS in computer engineering from the US Air Force Institute of Technology. He is a member of the IEEE and the US Armed Forces Communications and Electronics Association (AFCEA). Contact him at [email protected].
Richard A. Raines is an associate professor of electrical engineering in the department of electrical and computer engineering at the US Air Force Institute of Technology, Wright-Patterson AFB, Ohio. His research interests include computer communication networks, global communication systems, intrusion detection systems, and software protection. Raines has a PhD in electrical engineering from Virginia Polytechnic Institute and State University. He served 21 years in the US Air Force and Army. He is a member of Eta Kappa Nu and a senior member of the IEEE. Contact him at [email protected]
Rusty O. Baldwin is an associate professor of computer engineering in the department of electrical and computer engineering at the Air Force Institute of Technology, Wright-Patterson AFB, Ohio. His research interests include computer communication networks, embedded and wireless networking, information assurance, and reconfigurable computing systems. Baldwin has a PhD in electrical engineering from Virginia Polytechnic Institute and State University. He served 23 years in the US Air Force. He is a member of Eta Kappa Nu, and a senior member of the IEEE. Contact him at [email protected]
Barry E. Mullins is an assistant professor of computer engineering in the department of electrical and computer engineering at the US Air Force Institute of Technology, Wright-Patterson AFB, Ohio. His research interests include computer communication networks, embedded and wireless networking, information assurance, and reconfigurable computing systems. Mullins has a PhD in electrical engineering from Virginia Polytechnic Institute and State University. He served 21 years in the US Air Force, teaching at the US Air Force Academy for seven of those years. He is a member of Eta Kappa Nu, Tau Beta Pi, and the American Society for Engineering Education (ASEE), as well as a senior member of the IEEE. Contact him at [email protected]