
Implementing Digital Forensic Readiness for Cloud Computing Using Performance Monitoring Tools

F.R. van Staden and H.S. Venter
University of Pretoria

Abstract

Cloud computing is a scalable, distributed environment that exists within the confines of the internet and is used to deliver digital services to users. The problem is that the distributed nature of cloud computing makes it difficult to collect digital forensic data after a malicious incident has occurred. This article discusses the use of performance monitoring tools to implement digital forensic readiness for cloud computing. A learning management system is used as an example of a cloud computing implementation throughout the discussion.

Author

F.R. van Staden received his undergraduate degree in 2005 from the University of Pretoria and is presently studying towards his Master of Science degree at the same institution. His research interests include digital forensics, digital forensic readiness and trusted systems.

1. Introduction

Digital forensic specialists are plagued by having to sift through large data sets to find incident information. As part of the collection process, system log files are collected and scanned for data about the incident. The problem is that, due to storage constraints on live systems, most log files are rotated (i.e. old data is overwritten with new data) in a bid to save space. The process of log file rotation can cause data pertaining to a possible incident to be lost. Therefore, when an incident is detected, the production systems involved need to be stopped until the data collection process is complete.

Cloud computing is a scalable, distributed environment that exists within the confines of the internet and is used to deliver digital services to users. Services might be hosted in the same physical location, or the same service might be hosted in multiple physical locations. The distributed nature of cloud computing adds to the complexity of collecting digital forensic data after a malicious incident has occurred. The addition of digital forensic readiness to cloud computing allows for the collection of live digital forensic data while users are accessing services.

Proper application management includes application performance monitoring. The only way to ensure that users experience a service as the service provider intends is to monitor the performance of that service. Users may assume that, since service providers guarantee the quality of service provided, performance monitoring tools are already in place. The question this article addresses is whether a performance monitoring tool can be used to implement digital forensic readiness and so enhance digital forensic investigations of cloud computing implementations. Using a performance monitoring tool to collect log file data means that no time is spent collecting the log file data during a digital forensic investigation. Collecting live data with a performance monitoring tool means that no downtime is required for data collection. It does, however, imply that the collected data must be validated and stored in a read-only data environment.

This article discusses the use of performance monitoring tools to implement digital forensic readiness for a learning management system. A learning management system (LMS) provides services that support e-learning activities, such as communication, document management, assessment and content management tools. Using a web browser, users can access LMS services from anywhere, and the LMS can be implemented as a cloud computing service.

The rest of the article is structured as follows: Section 2 provides the background for this article; Section 3 discusses where the data is collected from and how the data is collected in a digital forensically sound manner; Section 4 discusses the basic investigations that were performed with the data. The article closes with the conclusion that it is possible to use a performance monitoring tool to collect digital forensic data from cloud computing implementations.

2. Background

In this section an overview is given of performance monitoring tools, digital forensics, digital forensic readiness, cloud computing and learning management systems. Performance monitoring tools are discussed to show the similarities that exist between the various data monitoring tools and to describe the components that are relied upon to implement the proposed solution. The digital forensics section discusses digital forensics and a digital forensic process model. Digital forensic readiness is discussed to show what is needed to make data ready to be used in a digital forensic investigation. A short overview of cloud computing and learning management systems is given to set the background for the environment that was used in the experiment.

2.1 Performance monitoring tools

Performance monitoring tools are designed to collect data about software systems in order to report on performance, uptime and availability. Live data about the monitored systems is used to detect system problems and pinpoint their source. Some performance monitoring tools make use of the Simple Network Management Protocol (SNMP), specified in RFC 5590 [1]. SNMP specifies a method for connecting to and collecting data from servers, firewalls and network devices, mainly for the purpose of performance monitoring.
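As an illustration of how a standard data point might be polled over SNMP, the sketch below queries a device's uptime using the net-snmp command-line tools from Python. This is a minimal, hypothetical example: the host address, community string and polling interval are assumptions, not details taken from the monitoring tool described in this article.

```python
import subprocess
import time

HOST = "192.0.2.10"                  # hypothetical device address
COMMUNITY = "public"                 # hypothetical SNMPv2c community string
OID = "SNMPv2-MIB::sysUpTime.0"      # standard data point: system uptime

def poll_once() -> str:
    # Requires the net-snmp "snmpget" utility to be installed on the probe host.
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, HOST, OID],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Collect the data point every five minutes, mirroring a timed collection cycle.
    while True:
        print(poll_once())
        time.sleep(300)
```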

[1] Harrington, D. and Schoenwaelder, J. Transport Subsystem for the Simple Network Management Protocol (SNMP). RFC 5590. IETF, 2009.

Data used by performance monitoring tools is categorised into live data, historical data and custom data. Live data is data that was collected during the latest completed collection cycle and is used to display the current system state. Live data can be kept for a period of time to show changes in system state over time; after a set period, the live data is reformatted and moved to the historical data set. Historical data is used to calculate the performance, uptime and availability of systems. Custom data is data collected by the performance monitoring system but not used for displaying system state or reformatted to become historical data. Performance monitoring systems normally do not understand the meaning of custom data, which is stored in read-only data tables to be used by custom reports. Descriptors can be created for custom data to explain the meaning of the data to the performance monitoring tool, but the data is then no longer regarded as custom.

The performance monitoring tool discussed in this article uses Secure Shell (SSH) to establish communication between the probes and the performance monitoring server. According to RFC 4251 [2], SSH is used to implement secure communication over an insecure network. The implication is that the integrity of the data sent to the performance monitor by the probes is assured. Data integrity must be assured if the data is to be used as part of a digital forensic investigation. Digital forensics and digital forensic readiness are discussed in the following section.
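The article does not describe the probe implementation itself, but the following sketch illustrates the general idea of a probe shipping newly collected data to the monitoring server over SSH. It assumes the Python paramiko library and hypothetical host, user and path names; the actual tool's transport is not reproduced here.

```python
import paramiko

MONITOR_HOST = "monitor.example.org"   # hypothetical monitoring server
PROBE_KEY = "/etc/probe/id_ed25519"    # hypothetical private key for the probe account

def ship_delta(local_path: str, remote_path: str) -> None:
    """Send one collection cycle's data to the monitoring server over SSH/SFTP."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()      # verify the server's host key (data origin)
    client.connect(MONITOR_HOST, username="probe", key_filename=PROBE_KEY)
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)   # transfer is encrypted and integrity-protected
        sftp.close()
    finally:
        client.close()

# Example: ship the latest extract of a log file read by an extended data point.
# ship_delta("/var/log/lms/app.log.delta", "/ingest/node01/app.log.delta")
```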

2.2 Digital forensics and digital forensic readiness

Digital forensic science is a relatively new field of study that evolved from forensic science to allow crime scene investigation of digital crime scenes. According to the Oxford Dictionary [3], digital forensic science is the systematic gathering of information about electronic devices that can be used in a court of law. Digital forensic science is more popularly called digital forensics and sometimes also computer forensics. Palmer [4] defines digital forensics as "the use of scientifically derived proven methods towards the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitation or furthering the reconstruction of events". Palmer's definition describes the digital forensic process, whereas the Oxford definition describes digital forensic science. The Digital Forensic Process Model (DFPM) by Kohn et al. [5] states that "any digital forensic process must have an outcome that is acceptable by law".

Rowlingson [6] defines digital forensic readiness in terms of two objectives. The first objective is to maximise the environment's capability of collecting digital forensic information and the second is to minimise the cost of a forensic investigation. To prepare any environment to be digitally forensically ready, a mechanism needs to be added to preserve, collect and validate the information contained in the environment. The information gathered from the environment can then be used as part of a digital forensic investigation. Cloud computing architecture and implementation models are discussed in the following section.

[2] Ylonen, T. The Secure Shell (SSH) Protocol Architecture. RFC 4251. Network Working Group, 2006.
[3] Oxford. AskOxford.com. Oxford University Press, 2010.
[4] Palmer, G.L. Road Map for Digital Forensic Research. Digital Forensic Research Workshop (DFRWS), 2002.
[5] Kohn, M., Eloff, J.H.P. and Olivier, M.S. UML Modelling of Digital Forensic Process Models (DFPMs). Information and Computer Security Architectures (ICSA) Research Group, University of Pretoria, 2009.
[6] Rowlingson, R. A Ten Step Process for Forensic Readiness. International Journal of Digital Evidence, Vol. 2, No. 3, 2004.

2.3 Cloud computing

Cloud computing is the new buzzword in digital service delivery. According to Kaufman [7], cloud computing is the ability to utilise scalable, distributed computing environments within the confines of the internet. Spring [8] explains that cloud computing has three distinct service models, namely Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Cloud computing can then be defined as a collective name for digital services that are made available over the internet to users.

The architectural layers of cloud computing are described by Spring [8] as Facility, Network, Hardware, OS, Middleware, Application and User. Each cloud computing service model defines at which layer the user's ownership ends and the service provider's ownership begins. The service provider always controls the facility, network and hardware of the cloud computing system. When implementing the IaaS model, the user owns the Middleware and Application architectural layers and could opt to control the OS layer too. When implementing the PaaS model, the user owns the Application architectural layer and could opt to control the Middleware layer. In a SaaS implementation the service provider owns every architectural layer except the User layer: the user is allowed to make use of a service but has no ownership or control of the service.

According to Spring [9], the different architectural layers should be monitored for performance and to detect security breaches. The responsibility for monitoring each architectural layer depends on the cloud computing model that is implemented and should be clearly defined in the service level agreement (SLA) an organisation enters into with a service provider. The SLA should also stipulate what type of reports will be made available to the organisation and at what time. Organisations must be able to trust service providers with the organisational data that is stored and processed in the cloud, but data protection remains the responsibility of the organisation. Song et al. [10] suggest implementing Data Protection as a Service (DPaaS) in combination with any other cloud computing model. DPaaS makes use of different security strategies to ensure the security of data while still enabling rapid development and maintenance. With DPaaS, accountability for data processing is provided by implementing logging and auditing functions. Learning management systems are discussed in the following section.

2.4 Learning Management System

Learning management systems are used to manage user learning activities through a web interface. According to McGill [11], a learning management system (LMS) is used to support e-learning activities by processing, storing and disseminating educational materials and by supporting the administration and communication associated with teaching and learning.

[7] Kaufman, L.M. Data Security in the World of Cloud Computing. IEEE Security & Privacy, July-August 2009, pp. 61-64.
[8] Spring, J. Monitoring Cloud Computing by Layer, Part 1. IEEE Security & Privacy, March-April 2011, pp. 66-68.
[9] Spring, J. Monitoring Cloud Computing by Layer, Part 2. IEEE Security & Privacy, May-June 2011, pp. 52-55.
[10] Song, D., et al. Cloud Data Protection for the Masses. IEEE Computer, January 2012, pp. 39-45.
[11] McGill, T.J. and Klobas, J.E. A Task-Technology Fit View of Learning Management System Impact. Vol. 52, 2009.

The LMS manages course information, online assessments, online assignments, course grades and course communications. Access to the LMS is gained by means of a web browser and an internet connection. The users of an LMS are the students and lecturers that have courses hosted on the LMS. An LMS is a collection of services that are accessed by users who take part in e-learning activities. Access to the different services offered by the LMS is controlled at course level: lecturers decide which services their students can use to support the students' learning activities. The following section discusses the implementation of the LMS as a cloud computing service and how the LMS is monitored.

3. Implementation of the Learning Management System

An LMS environment requires substantial computing resources and high system availability to provide efficient and effective services to users. Ensuring quality of service is made possible by virtualising the application server installations and combining multiple virtual servers to make up the LMS environment. The virtual server manager implements high availability by distributing virtual servers of the same type over different physical servers in different data centres. Virtualising the LMS environment in such a way amounts to implementing the LMS as a cloud computing service.

When an LMS is implemented in the Software as a Service (SaaS) cloud computing model, the LMS is hosted outside the organisation. The organisation owns the data that is contained inside the LMS but has no other control over the LMS. Performance monitoring should be implemented by the service provider to prove that service level agreements are met. Security monitoring and digital forensic readiness surrounding the LMS should also be included in the service level agreement between the organisation and the service provider. User activity data is normally supplied by an internal LMS process; since the organisation does not administer the service, the user activity data must be requested from the service provider.

The LMS could also be implemented using the Platform as a Service (PaaS) cloud computing model. The organisation cannot implement a performance monitoring tool in the PaaS model, since the service provider has ownership of the OS architectural layer. Performance and security monitoring must still be specified in the service level agreement. The organisation has ownership of the application and can access user activity data supplied by the application.

The decision was made to implement the LMS using the IaaS model, and the organisation opted to control the OS architectural layer. As part of the operational environment of the LMS, the organisation implemented a performance monitoring tool to collect data about the performance of the LMS environment. The performance data is used to ensure that users experience a good quality of service and to scale the LMS environment according to user utilisation figures. Figure 1 shows an example implementation of an LMS environment.


Figure 1. LMS layout with probes

The LMS architecture is monitored by the performance monitoring system, comprising a monitoring server and a set of probes that were installed on each component in the LMS environment. Different types of monitoring probes exist to monitor different data collection points such as system performance, database tables, log files and selected system files. Probes do not normally perform any processing but connect to a data point and periodically read the value of the data point. A data point can be a system value, also called a standard data point, or it can be a software-generated data point, also called an extended data point. Standard data points, such as CPU usage, memory usage and drive space usage, exist as part of the operating system. Extended data points are small applications that can be created to capture and expose data from log files and databases.

Communication between the monitoring probes and the performance monitoring server is established periodically, according to the timed events set by the system administrator on the performance monitoring server. If, for example, the timed event is set to five minutes, then the performance monitoring tool will communicate with the probes every five minutes. Communication between the probe and the monitoring server is achieved using SSH or HTTPS, depending on the implementation of the monitoring tool, which means that no tampering can occur during transmission of the data and that the origin of the data can be validated.

Implementing digital forensic readiness required the creation of extension points to read data from log files on all the LMS components and the central database. The extension points were defined in the performance monitoring server database using the data extensions provided by the performance monitoring system. Data read from the extension points is stored in read-only tables in the performance monitoring server database. A read-only database table only allows data to be stored using the SQL INSERT statement and read using the SQL SELECT statement; it does not allow data to be edited using an SQL UPDATE statement or deleted using an SQL DELETE statement. A hedged sketch of how such a read-only table could be set up is given at the end of this section. The following sections discuss the function and monitoring points of the different components, as depicted in Figure 1, starting with the load balancer.
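The article does not name the database product used by the performance monitoring server. As a minimal sketch, assuming a PostgreSQL-style database, a read-only table for the probe data can be approximated by granting the collection role only INSERT and SELECT privileges; the role name, column names and connection string below are assumptions.

```python
import psycopg2  # assumes a PostgreSQL backend, which the article does not specify

DDL = """
CREATE TABLE usersession_from_LB (
    origin_address TEXT        NOT NULL,
    session_id     TEXT        NOT NULL,
    created_at     TIMESTAMPTZ NOT NULL
);
-- The probe role may add and read rows, but never change or remove them.
GRANT INSERT, SELECT ON usersession_from_LB TO monitor_probe;
REVOKE UPDATE, DELETE, TRUNCATE ON usersession_from_LB FROM monitor_probe;
"""

conn = psycopg2.connect("dbname=perfmon user=perfmon_admin")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute(DDL)
conn.close()
```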

3.1 The load balancer

Users access LMS services through a load balancer device. A load balancer does exactly what the name implies, i.e. it distributes user session load over server resources in a balanced way. Servers that are placed in a load-balanced group are also placed in a virtual network. Any application server can freely communicate with any other application server in the same virtual network, but can only communicate with servers outside the virtual network by using the load balancer as a gateway device. The load balancer can also act as a firewall, by blocking traffic to certain ports, and as a reverse proxy, by hiding the architecture of the load-balanced servers from users. SSL offloading is implemented on the load balancer to enforce HTTPS between the user's browser and the LMS.

Figure 1 depicts probe A connected to the load balancer device. Probe A was extended to read data from the user session, specifically the origin address and the session_id. Since SSL offloading is implemented on the load balancer, the origin address of the user is used to verify the user's session and create a session cookie. The origin address and session_id are stored in a read-only load balancer table, called usersession_from_LB, on the performance monitoring system database, together with a time stamp of when the session was created, as depicted in Figure 2. A hedged sketch of such a probe extension is given at the end of this section.

Figure 2. Load balancer database table

When the initial connection is made, the originating address is used to create the session_id in the session cookie. All connections made to the load balancer from the same origin are assigned the same session cookie until the session times out. The session cookie contains information that allows the load balancer to route user requests to the same application node every time a user request is received during the lifetime of the session. The load balancer and the LMS share the session_id to ensure that the same user session is assigned to the same application node. The following section discusses the application nodes.
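The format of the load balancer's session log is not given in the article, so the following sketch of a probe A extension assumes a simple hypothetical line format. It parses the origin address and session_id from each new log line and inserts them, with the time stamp, into the usersession_from_LB table.

```python
import re
import psycopg2

# Hypothetical log line: "2012-03-01T08:15:02Z NEW_SESSION origin=10.0.5.17 session_id=ab12cd34"
LINE_RE = re.compile(
    r"^(?P<ts>\S+) NEW_SESSION origin=(?P<origin>\S+) session_id=(?P<sid>\S+)$"
)

def store_sessions(log_lines, conn) -> None:
    """Insert every new session record into the read-only table on the monitoring database."""
    with conn, conn.cursor() as cur:
        for line in log_lines:
            match = LINE_RE.match(line.strip())
            if match is None:
                continue  # ignore lines that are not session-creation events
            cur.execute(
                "INSERT INTO usersession_from_LB (origin_address, session_id, created_at) "
                "VALUES (%s, %s, %s)",
                (match["origin"], match["sid"], match["ts"]),
            )
```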

3.2 The application nodes

Although Figure 1 shows all the application nodes in one location, this might not be the case. As stated before, the virtual application nodes might reside on different physical servers in different data centres. The application nodes should be set up to work independently, meaning that the application nodes do not know of each other but can perform the same functions. Since the LMS is web based, each virtual application node has a web server and the same collection of web applications.


Figure 1 shows probes D to H connected to the different virtual application nodes. Probes D to H were extended to collect data from the virtual application nodes' log files. Data can be collected from several log files that exist on each virtual application node; the data selected for collection pertains to the pages that a user visits within the LMS. This data was stored in the performance monitoring database in a read-only table called user_session_from_AN, with a time stamp for each record. Figure 3 depicts the user_session_from_AN table.

Figure 3. Table used to store data from application nodes

Each application node must still have access to the same application data; therefore a central database and a central content store are used by all the virtual application nodes. The central database is discussed in the following section.
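Figure 3 is not reproduced here, so the exact columns of user_session_from_AN are not known; the sketch below assumes session_id, page and visited_at. It illustrates how a probe extension on an application node might extract page-visit records from a web server log for storage on the monitoring server.

```python
import re

# Hypothetical access-log fragment containing the requested page and the session cookie.
ACCESS_RE = re.compile(
    r'\[(?P<ts>[^\]]+)\] "(?:GET|POST) (?P<page>\S+) HTTP/1\.\d" .*JSESSIONID=(?P<sid>\w+)'
)

def page_visits(log_path):
    """Yield (session_id, page, visited_at) tuples destined for user_session_from_AN."""
    with open(log_path) as log:
        for line in log:
            match = ACCESS_RE.search(line)
            if match:
                yield (match["sid"], match["page"], match["ts"])
```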

3.3 The central database

The central database stores not only application data but also user session information. User session information is kept in the database for as long as the user session is active, to ensure that a user session can be recreated in the event that one of the application nodes fails. When the user session expires, the data is marked for removal by the database garbage collection process. User activity is also stored in the database, in user tracer tables, and is only removed when a user is deleted from the user table. User activity data is used by the LMS to profile user activity and correlate it with a user's academic performance. Probe C in Figure 1 was extended to collect user session data and user tracing data and save it to read-only tables, as depicted in Figure 4, on the performance monitoring database.

Figure 4. Tables used to store the data obtained from the central database

User session data is extracted daily, before the garbage collector clears the data, by querying the user session table, and is stored in the user_session_from_DB table. The data is stored in the performance monitoring database to ensure that the data is not lost and that a link between the username and the user's originating address can be kept. Tracer data is extracted periodically by querying the user tracer tables and is stored in the user_activity_from_DB table. A hedged sketch of such a daily extraction is given below. The central content store is discussed in the next section.
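The sketch below illustrates the daily extraction of user session data from the central database before the garbage collector removes it. The source table (lms_sessions), the column names and the connection strings are assumptions; only the target table, user_session_from_DB, is named in the article.

```python
import psycopg2

# Hypothetical connections: a read-only account on the LMS database and the
# insert-only probe account on the performance monitoring database.
lms = psycopg2.connect("dbname=lms user=probe_readonly")
mon = psycopg2.connect("dbname=perfmon user=monitor_probe")

with lms, lms.cursor() as src, mon, mon.cursor() as dst:
    src.execute("SELECT username, session_id, created_at FROM lms_sessions")
    for row in src:
        dst.execute(
            "INSERT INTO user_session_from_DB (username, session_id, created_at) "
            "VALUES (%s, %s, %s)",
            row,
        )

lms.close()
mon.close()
```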

3.4 The content store

LMS content, such as document files or movie files, is stored in the central content store. Access to LMS content is controlled by the roles that are defined in the LMS. For example, if content is allocated to a specific course in the LMS, only users enrolled in that course will be able to access the content. The actions initiated for content items (create, allocate, access, de-allocate and delete) are all recorded in the content log file. Probe B was extended to collect the action data from the content log file.

To safeguard against the accidental deletion of content items, a process was implemented on the content store that archives deleted content. The archive process creates a log file entry in the archive log, and probe B was extended to store the archive log entries as well. Figure 5 depicts the user_activity_from_CS table, which is used to store data from the content log file, and the archive_action_status_CS table, which is used to store the data from the archive log file.

Figure 5. Tables used to store data from the central content store log files

Section 4 gives an overview of the data collected and the way the data could be used.
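The archiving process on the content store is only described in outline. As a hedged sketch, it could move a deleted content item into an archive directory and append an entry to the archive log that probe B reads; the paths and log format below are assumptions rather than details from the article.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE_DIR = Path("/content/archive")            # assumed archive location
ARCHIVE_LOG = Path("/content/logs/archive.log")   # log file collected by probe B

def archive_deleted_item(item_path: str, username: str) -> None:
    """Move a deleted content item to the archive and record the action in the archive log."""
    source = Path(item_path)
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    shutil.move(str(source), str(ARCHIVE_DIR / source.name))
    stamp = datetime.now(timezone.utc).isoformat()
    with ARCHIVE_LOG.open("a") as log:
        log.write(f"{stamp} ARCHIVED item={source.name} by={username}\n")
```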

4. Collecting Digital Forensic Data from the LMS

This section discusses the data that was collected from the different probes and how the data was used to support different investigations. Firstly, it must be shown that the data could be used for a digital forensic investigation. To support a digital forensic investigation, the data must be collected, preserved and validated using scientifically proven methods. The data was collected using performance monitoring probes that were extended to collect the data from the system sources that would normally be used in an investigation. Communication between the probes and the monitoring server was achieved using SSH, which means that no tampering could occur during transmission of the data and that the origin of the data can be validated. Preservation of the collected data was implemented by extending the performance monitoring system's database with extra read-only database tables. As stated before, these tables allow the addition of new data and the reading of stored data, but do not allow stored data to be changed or deleted. The process of storing the data makes use of XML files to specify the mapping between the data the probes send and the location in the database where the data is stored.
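The XML mapping files are not reproduced in the article. The fragment below is a hypothetical illustration of how such a file might map the fields sent by a probe to the columns of a read-only table, parsed here with Python's standard xml.etree.ElementTree module.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping document for probe A; element and attribute names are assumptions.
MAPPING_XML = """
<mapping probe="A" table="usersession_from_LB">
  <field source="origin"     column="origin_address"/>
  <field source="session_id" column="session_id"/>
  <field source="timestamp"  column="created_at"/>
</mapping>
"""

root = ET.fromstring(MAPPING_XML)
table = root.get("table")
column_map = {field.get("source"): field.get("column") for field in root.findall("field")}
print(table, column_map)
# usersession_from_LB {'origin': 'origin_address', 'session_id': 'session_id', 'timestamp': 'created_at'}
```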

Three experimental investigations were constructed to show that the data could be used to support an investigation. The following sections discuss the experimental investigations that were attempted and the outcome of each.

4.1 Tracing the origin of a user

The first question posed was whether we could determine the origin of a specific user using the data we collected. A database view called user_origin was created using the tables usersession_from_LB and user_session_from_DB. Querying the view with a username produced a list of all the sessions and origins that were logged for that user. Since the user could have multiple sessions with different origins over time, the list was sorted according to the stored time stamps. To prove that the data was correct, a test was done using ten different users over a period of one month. The test users recorded the times that they accessed the system and the addresses of the connecting machines. Comparing the data logged by the users with the data stored in the database proved that the data in the list was accurate. This showed that we could, using the collected data, determine the origin of a specific user as long as we also knew the time that the user accessed the system. An added result of analysing the list was that we detected occasions where the same user accessed the LMS at the same time but from different origins. Users that were identified as accessing the LMS from different origins at the same time were forced to change their system passwords. The following section discusses using the collected data to reconstruct user activity.
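A hedged sketch of the user_origin view and the query used in this investigation is given below. The join condition, column names and connection string are assumptions, since the article names only the tables and the view.

```python
import psycopg2

CREATE_VIEW = """
CREATE VIEW user_origin AS
SELECT db.username,
       db.session_id,
       lb.origin_address,
       lb.created_at
FROM   user_session_from_DB AS db
JOIN   usersession_from_LB  AS lb ON lb.session_id = db.session_id;
"""

QUERY = """
SELECT session_id, origin_address, created_at
FROM   user_origin
WHERE  username = %s
ORDER  BY created_at;
"""

conn = psycopg2.connect("dbname=perfmon user=investigator")  # hypothetical read-only account
with conn, conn.cursor() as cur:
    cur.execute(CREATE_VIEW)
    cur.execute(QUERY, ("student042",))   # hypothetical username
    for session_id, origin_address, created_at in cur:
        print(session_id, origin_address, created_at)
conn.close()
```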

4.2 Reconstructing user activity

The next question was whether, using the data we collected, we could reconstruct a user's activity in the system over a period of time. A database view called user_activity was created using the tables user_session_from_DB, user_activity_from_DB and user_session_from_AN. Querying the view with a specific username generated a list of all the actions that the user performed during the sessions in which the user was logged in. Since users could perform different actions in different sessions over time, the time stamp and session_id were used to sort the list. The same ten users from the previous section were used to prove that the data in the list was correct. The users were asked to record their activity on the LMS over a month. Correlating the activity recorded by the users with the data in the list proved that the data in the list was correct. Using the data in the list, it was possible to reconstruct a user's activity on the LMS for a specific time period. The following section discusses using the collected data to determine when content was deleted and by whom.
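Similarly, the user_activity view can be sketched as a join over the three tables, sorted by time stamp and session_id as described above; the join conditions and column names are again assumptions rather than the authors' actual definition. The statements below would be executed against the monitoring database in the same way as the user_origin view in the previous sketch.

```python
# Hedged sketch of the user_activity view used to reconstruct a user's actions.
CREATE_USER_ACTIVITY_VIEW = """
CREATE VIEW user_activity AS
SELECT db.username,
       db.session_id,
       an.page,
       act.action,
       act.recorded_at
FROM   user_session_from_DB  AS db
JOIN   user_session_from_AN  AS an  ON an.session_id = db.session_id
JOIN   user_activity_from_DB AS act ON act.username  = db.username;
"""

QUERY_BY_USER = """
SELECT session_id, page, action, recorded_at
FROM   user_activity
WHERE  username = %s
ORDER  BY recorded_at, session_id;
"""
```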

4.3 Determining when and by whom a content item was changed

Finally, the question was posed whether, using the data we collected, we could determine when and by whom a content item was changed. A view called user_content_activity was created using the tables user_session_from_DB, user_activity_from_DB, user_session_from_AN and user_activity_from_CS. Data that did not pertain to content store activity was excluded from the view. Querying the view using the name of a specific content item, and sorting the result by time stamp, produced a list of the user actions performed on the content item over time. The list was further reduced by excluding access events. To prove that the collected data was correct, a set of content items was created and randomly changed, after which all the content items were deleted by different users. The users that took part in the testing were asked to record the actions that they performed on the content items and the times at which they performed them. Correlating the data recorded by the users with the data in the list proved that the data in the list was correct. Using the final list, it was possible to determine when and by whom a content item was created, changed or deleted.

5. Conclusion

This article posed the question of whether a performance monitoring tool can be used to implement digital forensic readiness in cloud computing in order to enhance digital forensic investigations. A performance monitoring tool was extended to collect data from a learning management system, which was used as an example of a cloud computing implementation. Experimental digital forensic investigations were performed with the data that was collected by the performance monitoring tool.

It was shown that data can be collected, preserved and validated using a performance monitoring tool. The data was collected from the live system in real time, which meant that no data was lost due to log file rotation or database garbage collection processes. Investigations could be performed without any costly downtime on the LMS, since the data was preserved on a system separate from the LMS. Investigators need not collect data by sifting through log files on the LMS, which also speeds up the investigation process. Experimental investigations were successfully performed using the collected data, which showed that the data could be used for investigations. The investigations were completed by running simple queries on the performance monitoring database tables. It was therefore shown that a performance monitoring tool can be used to implement digital forensic readiness in cloud computing in order to enhance digital forensic investigations.

6. Future Work

After collecting data from the probes for some time, it was found that the data storage requirement for implementing digital forensic readiness might at some point become too expensive to be sustainable. Future work will include implementing automated processes to detect possible incidents and collecting data only for those possible incidents, in order to reduce the amount of data collected.

Acknowledgment

This work is based on research supported by the National Research Foundation of South Africa (NRF) as part of a SA/Germany research cooperation programme. Any opinion, findings and conclusions or recommendations expressed in this material are those of the author(s) and therefore the NRF does not accept any liability in regard thereto.