JAPET: A Java Based Tool for Performance Evaluation of Software Systems

D Evangelin Geetha, T V Suresh Kumar, K Rajani Kanth
M S Ramaiah Institute of Technology, Bangalore – 560054, India

Ch Ram Mohan Reddy
BMS College of Engineering, Bangalore – 560 019, India
Abstract

Performance engineering is vital for developing software systems effectively. System performance validation provides assurance that a system as a whole is likely to meet its quantitative performance goals. It applies performance engineering methods and tools throughout a system's development and has to be integrated into the design process from the very beginning through system deployment. In this paper, we describe the features and use of a prototype tool, JAPET (JAva based Performance Evaluation Tool). The tool supports the software performance engineering process for object-oriented systems. Its use is illustrated with a case study of an e-commerce system.

1. Introduction

Software Performance Engineering (SPE) is a technique for assessing the performance of software systems early in the life cycle [2], [3]. SPE continues through all phases of the software development life cycle, monitoring and reporting actual performance against specifications and predictions. Hence, SPE is important for software engineering and, in particular, for software quality. Performance problems may be so severe that they require extensive changes to the system architecture; if these changes are made late in the development process, they can increase development costs, delay deployment, or adversely affect other desirable qualities of a design. The SPE approach yields better performance than can be achieved with a 'fix-it-later' approach [1]. Early assessment of the performance characteristics of distributed object-oriented systems is especially important because their functionality is decentralized. The need for automation in these systems is undeniable: many processes, when automated, become cost-effective by consuming less time and effort and, consequently, less money. If software performance models were generated automatically, software designers would find it easier to employ them in their development cycles, thus bridging the gap between the software development and software performance analysis domains.


2. Related work

Software performance engineering has evolved over the years and has been shown to be effective during the development of many large systems [2], [3]. Extensions to the SPE process and its associated models for assessing distributed object-oriented systems are discussed in [5]. A predictive performance-modeling environment that produces performance measures for distributed object-oriented systems is described in [8], [10]. [4] describes the use of SPE-ED, a performance-modeling tool supporting the SPE process, for early life-cycle performance evaluation of object-oriented systems. QUEST, a performance tool that integrates performance evaluation with the SDL method, is described in [9]. HIT is another automated tool for model-based performance evaluation of computing systems during all phases of their life cycle [7]. Sharma et al. [12] have proposed an approach for the performance evaluation of software systems following a layered architecture; it first models the system as a discrete-time Markov chain, then extracts parameters for constructing a closed product-form queuing network model that is solved with the SHARPE software package. In this paper, we discuss the features and use of a prototype tool, JAPET, which solves the software execution model to obtain the response time and the system execution model to obtain other performance metrics.

3. JAPET overview

This section gives an overview of the features of JAPET that make it appropriate for object-oriented systems.

3.1. Focus

The tool focuses on the evaluation of software performance. Two models describe the system: the software execution model and the system execution model. Users create the software execution model from the system requirements by exploiting the features of UML and providing performance specifications [6]. The system execution model is created automatically from the software execution model [2]. A combination of analytic and simulation model solutions identifies potential performance problems and the software processing steps that may cause them. JAPET facilitates the creation of simple models of software processing, with the goal of using the simplest possible model that identifies problems with the software architecture, design, or implementation plans.

3.2. Model description

We solve the software execution model analytically to obtain performance metrics such as response time, throughput and utilization. For this software execution model, we propose parameters for each node of the model: the software resource requirements and the hardware resource requirements. These parameters are the input to JAPET. The software resources may be the number of messages transmitted, the number of SQL queries, the number of SQL updates, and so on, depending on the type of system being studied and its key performance drivers. A performance specialist provides overhead specifications that estimate the computer resource requirement for each software resource request. These are specified once and reused for every software analysis that executes in that environment.
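The mapping from software resource requests to computer resource demands can be sketched as follows. This is an illustrative sketch only; the class and method names (OverheadMatrix, demandFor) are our assumptions for exposition and are not taken from JAPET's implementation, but the computation is the standard SPE product of the overhead matrix w(i,j) and the software resource vector a(j).

// Illustrative sketch (not JAPET source): computer resource demand of one
// processing step, derived from its software resource requirements.
public final class OverheadMatrix {
    // w[i][j]: amount of computer resource i consumed per request of
    // software resource j (e.g. CPU seconds per SQL query).
    private final double[][] w;

    public OverheadMatrix(double[][] w) {
        this.w = w;
    }

    // a[j]: number of requests of software resource j made by the step.
    // Returns d[i]: total demand placed on each computer resource.
    public double[] demandFor(double[] a) {
        int k = w.length;        // number of computer resources (K)
        int m = w[0].length;     // number of software resources (M)
        double[] d = new double[k];
        for (int i = 0; i < k; i++) {
            for (int j = 0; j < m; j++) {
                d[i] += w[i][j] * a[j];
            }
        }
        return d;
    }
}

Because the overhead specifications are entered once per execution environment, the same overhead matrix can be reused for every processing step analysed in that environment.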

3.3. Model solution

JAPET produces analytic results for the software execution model and an approximate analytic solution of the generated system execution model. The user enters the software resource data for each software component and the hardware resource data (hardware configuration) on which the software is to be deployed. The tool reports, among other parameters, the total hardware resource requirement for the scenario and the response time.
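For the software execution model, a standard SPE approximation of the best-case (no-contention) response time of a scenario is the sum of the per-device service demands accumulated over all processing steps. The following sketch shows this accumulation; the class and method names are assumptions for illustration, not JAPET's API.

// Illustrative sketch: accumulating per-device service demands (in seconds)
// over all processing steps of a scenario. The sum of the resulting vector
// is the best-case (no-contention) response time; the vector itself is used
// to parameterise the system execution model.
public final class SoftwareModelSolver {

    public static double[] totalDemands(java.util.List<double[]> stepDemands, int numDevices) {
        double[] total = new double[numDevices];
        for (double[] d : stepDemands) {
            for (int i = 0; i < numDevices; i++) {
                total[i] += d[i];
            }
        }
        return total;
    }
}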

3.4. Model results

The results reported by JAPET are the end-to-end response time, the device utilization, the throughput, the queue length and the amount of time spent at each computer device for each processing step. These results also identify potential computer device bottlenecks.
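These metrics can be approximated with standard open queueing network formulas. The sketch below is an illustration only, under the assumption of independent M/M/1 devices; the class and method names are ours and do not reflect JAPET's internal API.

// Illustrative sketch: approximate open queueing network solution for
// independent M/M/1 devices. lambda is the scenario arrival rate
// (scenarios per second) and demand[i] is the total service demand
// placed on device i by one scenario (seconds).
public final class OpenQnSolver {

    public static void solve(double lambda, double[] demand, String[] deviceName) {
        double responseTime = 0.0;
        int bottleneck = 0;
        for (int i = 0; i < demand.length; i++) {
            double utilization = lambda * demand[i];                  // U_i = X * D_i
            if (utilization >= 1.0) {
                throw new IllegalArgumentException(deviceName[i] + " is saturated");
            }
            double residence = demand[i] / (1.0 - utilization);       // R_i = D_i / (1 - U_i)
            double queueLength = utilization / (1.0 - utilization);   // mean number at device i
            responseTime += residence;
            if (demand[i] > demand[bottleneck]) {
                bottleneck = i;                                       // highest-demand device
            }
            System.out.printf("%s: U=%.3f R=%.3f s N=%.3f%n",
                    deviceName[i], utilization, residence, queueLength);
        }
        System.out.printf("End-to-end response time: %.3f s, throughput: %.3f/s, bottleneck: %s%n",
                responseTime, lambda, deviceName[bottleneck]);
    }
}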

Model results are presented both as numeric values and as graphs. This lets users view any combination of performance metrics and compare them across design or implementation choices.

4. Case study

To illustrate the use of JAPET for modeling and evaluating the performance of object-oriented systems, we present a case study based on an e-commerce application.

4.1. Description

A parking space provider offers parking space available for reservation. This information is registered in the PSOS (Parking Space Optimization Service) database. Users access the PSOS via the Internet to obtain parking information or to make a reservation request, which is registered in the PSOS database. The PSOS sends the booking information and an access code to the end user, subject to the acceptance of the reservation request by the parking space provider [11].

4.2. Performance evaluation with JAPET

After identifying the scenarios and their processing steps in the sequence diagram, the analyst generates the software execution model, i.e. the software execution graph, from the UML models as described in [6]. The next step is to specify the software resource requirements for each processing step. The software resource requirements we examine in this case study are: input, server database access, local database access, page size, and data size. The hardware resources are CPU, disk, delay, Internet and LAN. The analyst then uses JAPET to evaluate the execution graph model. Figure 1 shows the JAPET screen. The tool is menu driven, and Figure 1 shows the various node options that can be selected. After a node type is selected, the tool takes the values for the remaining input parameters:

- For the base node, it takes the number of computer resources (K), the number of software resources (M), the amount of each computer resource required per software resource request (w(I,J)) and the software resource requirements (a(j)).
- Nodes other than the base node take some additional parameters.
- For a sequential node, it takes the number of nodes in the sequence and the corresponding software resource requirements.
- For a repetition node, it takes the number of repetitions.
- For a case node, it takes the number of alternatives, the number of nodes in each alternative, the probability of occurrence of each alternative and the corresponding software resource requirements.
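The way the values entered for these node types combine can be illustrated with the usual execution graph reduction rules: a sequence adds its children's demands, a repetition multiplies by the loop count, and a case node weights each alternative by its probability. The sketch below is an illustration only; the node types and names are our assumptions and do not reflect JAPET's internal design.

// Illustrative sketch: reducing an execution graph to a single demand vector
// using the usual SPE reduction rules.
import java.util.List;

interface GraphNode {
    double[] reduce();                        // total computer resource demand
}

final class BasicNode implements GraphNode {
    private final double[] demand;            // d[i] from the overhead matrix
    BasicNode(double[] demand) { this.demand = demand; }
    public double[] reduce() { return demand.clone(); }
}

final class SequenceNode implements GraphNode {
    private final List<GraphNode> children;   // executed one after another (assumed non-empty)
    SequenceNode(List<GraphNode> children) { this.children = children; }
    public double[] reduce() {
        double[] total = null;
        for (GraphNode c : children) {
            double[] d = c.reduce();
            if (total == null) total = new double[d.length];
            for (int i = 0; i < d.length; i++) total[i] += d[i];      // sum of children
        }
        return total;
    }
}

final class RepetitionNode implements GraphNode {
    private final GraphNode body;
    private final double repetitions;
    RepetitionNode(GraphNode body, double repetitions) {
        this.body = body; this.repetitions = repetitions;
    }
    public double[] reduce() {
        double[] d = body.reduce();
        for (int i = 0; i < d.length; i++) d[i] *= repetitions;       // loop count times body
        return d;
    }
}

final class CaseNode implements GraphNode {
    private final List<GraphNode> alternatives;                       // assumed non-empty
    private final double[] probability;                               // probability of each alternative
    CaseNode(List<GraphNode> alternatives, double[] probability) {
        this.alternatives = alternatives; this.probability = probability;
    }
    public double[] reduce() {
        double[] total = null;
        for (int k = 0; k < alternatives.size(); k++) {
            double[] d = alternatives.get(k).reduce();
            if (total == null) total = new double[d.length];
            for (int i = 0; i < d.length; i++) total[i] += probability[k] * d[i];
        }
        return total;
    }
}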

Figure 1. Screen of JAPET

Figure 2. Input screen for case node

Figure 3. Output screen (Total computer resource requirement)

Figure 4. Output screen (Total response time)

The output values, namely the computer resource requirement for each software resource, the total software resource requirement and the total computer resource requirement for the scenario, are displayed as shown in Figures 3 and 4.

Figure 5. Performance metrics for each device

4.3. System execution model

The system execution model is solved as a queuing network model. Performance metrics for each hardware resource and for the system as a whole are obtained, as shown in Figures 5 and 6. The reports are also generated in the form of graphs, as shown in Figure 7.

Figure 6. Performance metrics for the system

Figure 7. Graphs – Performance metrics

5. Simulation results

For our discussion, we consider two software architectures, Architecture I and Architecture II, shown in Figure 8 and Figure 9 respectively. The governing performance parameters considered are the arrival rate, the system response time and the server processing time, plotted in Figures 10, 11 and 12. When the server processing time and arrival rate are low, the response time of Architecture II is low, while the response time of Architecture I is comparatively high. As the server processing time and arrival rate increase, the response times of Architecture I and Architecture II become approximately equal. Hence, at higher server processing times the two architectures behave in a similar fashion, whereas at lower values of the governing parameters Architecture II is preferable. The hardware configuration of the system also plays a dominant role in distributed applications, but this aspect is not addressed in this paper. For our case study, Architecture II, which shows the lower response time, is therefore preferred.

Figure 8. Architecture I

Figure 9. Architecture II

Figure 10. Simulation result for Architecture I

Figure 11. Simulation result for Architecture II

Figure 12. Server processing time vs. response time for Architecture I and Architecture II

6. Conclusion

We have discussed the features of a prototype tool, JAPET, and illustrated its application with a case study of an e-parking application. Results are obtained, different performance metrics are compared, and reports are provided in the form of graphs. Future work includes automating the generation of execution graphs from UML diagrams.

7. References

[1] Connie U. Smith, "Software Performance Engineering: A Case Study including Performance Comparison with Design Alternatives", IEEE Transactions on Software Engineering, Vol. 19, No. 7, July 1993.

[2] Connie U. Smith, Performance Engineering of Software Systems, Addison-Wesley, Reading, MA, 1990.

[3] Connie U. Smith and Lloyd G. Williams, Performance Solutions, 2000.

[4] Connie U. Smith and Lloyd G. Williams, "Performance Engineering Evaluation of Object-Oriented Systems with SPE-ED", LNCS 1245, Springer-Verlag, 1997, pp. 135-153.

[5] Connie U. Smith and Lloyd G. Williams, "Performance Engineering Models of CORBA-based Distributed-Object Systems", Performance Engineering Services and Software Engineering Research, 1998.

[6] Evangelin Geetha D, Suresh Kumar T V and Rajani Kanth K, "Early Performance Modeling for Multi Agent Systems using UML 2.0", IJCSNS International Journal of Computer Science and Network Security, Vol. 6, No. 3A, March 2006, pp. 247-254.

[7] "HIT and HI-SLANG, An Introduction", Version 3.1.000.

[8] Kahkipuro P, "UML-Based Performance Modeling Framework for Component-Based Distributed Systems", in R. Dumke et al. (Eds.), Performance Engineering, LNCS 2047, Springer, 2001, pp. 167-184.

[9] Marc Diefenbruch, Jörg Hintelmann, Axel Hirche and Bruno Müller-Clostermann, "QUEST User Manual", Version 1.3, June 1999.

[10] Peter Utton, Gino Martin, David Akehurst and Gill Waters, "Performance Analysis of Object-oriented Designs for Distributed Systems", Technical Report, University of Kent at Canterbury.

[11] Thomas B. Hodel-Widmer and Suo Cong, University of Zurich, "Parking Space Optimization Service", Swiss Transport Research Conference, March 24-25, 2004.

[12] Vibhu Saujanya Sharma, Pankaj Jalote and Kishor S. Trivedi, "Evaluating Performance Attributes of Layered Software Architecture", CBSE 2005, LNCS 3489, Springer-Verlag, 2005, pp. 66-81.