
Model-based Performance Analysis of Service-Oriented Systems
Dorina C. Petriu
Carleton University, Department of Systems and Computer Engineering, Ottawa, Canada, K1S 5B6
http://www.sce.carleton.ca/faculty/petriu.html
QUASOSS 2010


Analysis of Non-Functional Properties 

Model-Driven Engineering enables the analysis of non-functional properties (NFP) of software models:
- examples of NFPs: performance, scalability, reliability, security, etc.
- many existing formalisms and tools for NFP analysis: queueing networks, Petri nets, process algebras, Markov chains, fault trees, probabilistic timed automata, formal logic, etc.
- research challenge: bridge the gap between MDD and the existing NFP analysis formalisms and tools, rather than "reinventing the wheel"

Approach:
- add annotations expressing the different NFPs to the software models
- define model transformations from annotated software models to different NFP analysis models
- using existing solvers, analyze the NFP models and give feedback to the designers

In the UML world: define extensions as UML Profiles for expressing NFPs
- UML Profile for Schedulability, Performance and Time (SPT)
- UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE)


Software performance evaluation in MDE

[Figure: tool chain. A UML + MARTE software model (in a UML tool) is converted by a model-to-model transformation into a performance model, which a performance analysis tool solves to produce performance analysis results; the results are fed back to the designers. A model-to-code transformation generates the software code from the same model.]

Software performance evaluation in the context of Model-Driven Engineering:
- starting point: a UML software model, also used for code generation
- add performance annotations (using the MARTE profile)
- generate a performance analysis model: queueing network, Petri net, stochastic process algebra, Markov chain, etc.
- solve the analysis model to obtain quantitative results
- analyze the results and give feedback to the designers


Transformation Target: Performance Models


Performance modeling formalisms

Analytic models
- Queueing Networks (QN): capture contention for resources well; efficient analytical solutions exist for a class of QN ("separable" QN), for which steady-state performance measures can be derived without resorting to the underlying state space.
- Stochastic Petri Nets: good flow models, but not as good for resource contention; the Markov-chain-based solution suffers from state-space explosion.
- Stochastic Process Algebra: introduced in the mid-90s by merging process algebra and Markov chains.
- Stochastic Automata Networks: communicating automata synchronized by events, with random execution times; Markov-chain-based solution (corresponding to the system state space).

Simulation models
- less constrained in their modeling power; can capture more detail
- harder to build and more expensive to solve (the model must be run repeatedly)


Queueing Networks (QN) 

A queueing network model is a directed graph:
- nodes are service centres, each representing a resource;
- customers, representing the jobs, flow through the system and compete for these resources;
- arcs with associated routing probabilities (or visit ratios) determine the paths that customers take through the network.

- QNs are used to model systems with stochastic characteristics.
- Multiple customer classes: each class has its own workload intensity (arrival rate or number of customers), service demands and visit ratios.
- Bottleneck service centre: the centre that saturates first (highest demand and utilization).

[Figure: an open QN system (arrivals flow through the CPU, Disk 1 and Disk 2 and then depart) and a closed QN system (a fixed population of terminal users cycling through the CPU, Disk 1 and Disk 2).]
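To make the bottleneck notion concrete, here is a minimal sketch of how an open QN like the one in the figure can be evaluated with basic operational laws, assuming Poisson arrivals and exponential service (independent M/M/1 centres); the arrival rate and per-centre service demands below are made-up illustrative values, not taken from the slides.

# Minimal open-QN evaluation assuming independent M/M/1 service centres (illustrative values only).

def evaluate_open_qn(arrival_rate, demands):
    """demands maps each centre to its total service demand per job (visits x service time), in seconds."""
    per_centre = {}
    for centre, demand in demands.items():
        utilization = arrival_rate * demand            # U_k = lambda * D_k
        if utilization >= 1.0:
            raise ValueError(f"{centre} is saturated (U = {utilization:.2f})")
        residence = demand / (1.0 - utilization)       # R_k = D_k / (1 - U_k) for an M/M/1 centre
        per_centre[centre] = (utilization, residence)
    response_time = sum(r for _, r in per_centre.values())
    return per_centre, response_time

demands = {"CPU": 0.005, "Disk1": 0.012, "Disk2": 0.008}   # hypothetical demands in seconds per job
per_centre, response = evaluate_open_qn(arrival_rate=50.0, demands=demands)
for centre, (u, r) in per_centre.items():
    print(f"{centre}: utilization {u:.2f}, residence time {r * 1000:.1f} ms")
print(f"End-to-end response time: {response * 1000:.1f} ms")
print("Bottleneck (saturates first):", max(demands, key=demands.get))

The centre with the largest demand (Disk1 in this made-up example) is the bottleneck: as the arrival rate grows, it is the first to reach a utilization of 1.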


Single Service Center: Non-linear Performance

Typical non-linear behaviour of queue length and waiting time:
- the server reaches saturation at a certain arrival rate (utilization close to 1)
- at low workload intensity, an arriving customer meets little competition, so its residence time is roughly equal to its service demand
- as the workload intensity rises, congestion increases, and the residence time rises along with it
- as the service center approaches saturation, small increases in arrival rate result in dramatic increases in residence time

[Plots: utilization, queue length and residence time as functions of arrival rate for a single service center; all three grow slowly at first and then sharply as the arrival rate approaches the saturation point.]
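The shape of these curves follows from the basic single-queue relations; for a centre with service demand D and Poisson arrivals at rate λ (the M/M/1 case, a standard result rather than anything specific to these slides):

U = λD,   R = D / (1 - λD),   N = λR = U / (1 - U)

so the residence time R and queue length N stay close to D and U at light load, but grow without bound as U approaches 1, which is exactly the saturation behaviour sketched above.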


Layered Queueing Network (LQN) model http://www.sce.carleton.ca/rads/lqn/lqn-documentation



LQN is an extension of QN:
- models both software tasks (rectangles) and hardware devices (circles)
- represents nested services (a server is also a client to other servers)
- software components have entries corresponding to their different services
- arcs represent service requests (synchronous, asynchronous and forwarding)
- multi-servers are used to model components with internal concurrency

[Figure: example LQN. A Client task (entry clientE) on the Client CPU requests entries service1 and service2 of the Appl task (on the Appl CPU), which in turn requests entries query1 and query2 of the DB task (on the DB CPU); the DB also uses the Disk1 and Disk2 devices. The legend identifies entries, tasks and devices.]
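As a rough illustration of these concepts (not the LQN tool's input format), the model in the figure could be captured as plain data: each task has a host processor and entries, and calls connect entries across layers. The call arcs listed below are one plausible reading of the figure; the dictionary layout and field names are hypothetical.

# Illustrative in-memory representation of the example LQN (hypothetical structure, not the LQNS file syntax).
lqn = {
    "tasks": {
        "Client": {"host": "Client CPU", "entries": ["clientE"]},
        "Appl":   {"host": "Appl CPU",   "entries": ["service1", "service2"]},
        "DB":     {"host": "DB CPU",     "entries": ["query1", "query2"]},
    },
    "devices": ["Client CPU", "Appl CPU", "DB CPU", "Disk1", "Disk2"],
    "calls": [                        # synchronous requests between entries (assumed pairing)
        ("clientE", "service1"), ("clientE", "service2"),
        ("service1", "query1"), ("service2", "query2"),
    ],
}

for task, spec in lqn["tasks"].items():
    print(f"task {task} on {spec['host']} offers entries: {', '.join(spec['entries'])}")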


LQN extensions: activities, fork/join

[Figure: LQN with activities and fork/join. Local Client and Remote Client tasks (on 1..m Local Wks and 1..n Remote Wks, the latter reached through an Internet delay) call the Web Server task on the Web Proc, whose internal behaviour is modelled with activities a1-a4 and fork/join (&); lower layers include a Secure Proc task with the SDisk, an eComm Server on the eComm Proc, a Secure DB, and a DB task on the DB Proc with its Disk. Entries e1-e7 label the services offered by these tasks.]


Performance versus Schedulability

Difference between performance and schedulability analysis:
- performance analysis: timing properties of best-effort and soft real-time systems, e.g., information processing systems, web-based applications and services, enterprise systems, multimedia, telecommunications
- schedulability analysis: applied to hard real-time systems with strict deadlines; the analysis is often based on worst-case execution times and deterministic assumptions

Statistical performance results (analysis outputs):
- mean (and variance) of throughput, delay (response time), queue length
- resource utilization
- probability of missing a target response time

Input parameters to the analysis are also probabilistic:
- random arrival process
- random execution time for an operation
- probability of requesting a resource

A performance model represents the system at runtime:
- it must include characteristics of both the software application and the underlying platforms
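As an illustration of the last kind of output (a standard M/M/1 result, not taken from the slides): if a service centre behaves like an M/M/1 queue with service rate μ and arrival rate λ, its response time is exponentially distributed, so the probability of missing a target response time t is

P(R > t) = exp(-(μ - λ)·t)

a soft, probabilistic statement about timing, in contrast with the hard worst-case guarantees sought by schedulability analysis.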


UML Profiles for performance annotations: SPT and MARTE


UML SPT Profile Structure

[Figure: SPT profile package structure. The General Resource Modeling Framework («profile» RTresourceModeling, imported by «profile» RTconcurrencyModeling and «profile» RTtimeModeling), the Analysis Models («profile» SAProfile, «profile» PAprofile, «profile» RSAprofile) and the Infrastructure Models («modelLibrary» RealTimeCORBAModel), connected by «import» dependencies.]


SPT Performance Profile: Fundamental concepts

- Scenarios define execution paths with externally visible end points.
  - QoS requirements can be placed on scenarios.
- Each scenario is executed by a workload:
  - open workload: requests arriving in some predetermined pattern
  - closed workload: a fixed number of active or potential users or jobs
- Scenario steps are the elements of scenarios, joined by predecessor-successor relationships which may include forks, joins and loops.
  - a step may be an elementary operation or a whole sub-scenario
- Resources are used by scenario steps; quantitative resource demands for each step must be given in the performance annotations.
- The main reason for building performance models is to compute the additional delays due to competition for resources.
- Performance results include resource utilizations, waiting times, response times and throughputs.
- Performance analysis is applied to real-time systems with stochastic characteristics and soft deadlines (using mean-value analysis methods).


SPT Performance Profile: the domain model

[Figure: SPT performance domain metamodel. A PerformanceContext groups Workloads, PScenarios and PResources. Workloads (ClosedWorkload: population, externalDelay; OpenWorkload: occurrencePattern; common attributes responseTime, priority) drive a root PScenario (hostExecDemand, responseTime), which is an ordered composition of PSteps (probability, repetition, delay, operations, interval, executionTime) linked by predecessor/successor associations; each PStep may have a host PResource. PResources (utilization, schedulingPolicy, throughput) are specialized into PProcessingResource (processingRate, contextSwitchTime, priorityRange, isPreemptible) and PPassiveResource (waitingTime, responseTime, capacity, accessTime).]


MARTE overview

The MARTE domain model is organized in three parts:
- MarteFoundations: foundations for modeling and analysis of RT/E systems: CoreElements, NFPs, Time, Generic resource modeling, Generic component modeling, Allocation
- MarteDesignModel: specialization of the MARTE foundations for modeling purposes (specification, design, etc.): RTE model of computation and communication, Software resource modeling, Hardware resource modeling
- MarteAnalysisModel: specialization of the MARTE foundations for annotating models for analysis purposes: Generic quantitative analysis, Schedulability analysis, Performance analysis


GQAM dependencies and architecture

- GQAM (Generic Quantitative Analysis Modeling): common concepts for analysis
- SAM: modeling support for schedulability analysis techniques
- PAM: modeling support for performance analysis techniques

[Figure: package diagram. GQAM (with sub-packages GQAM_Workload, GQAM_Resources and GQAM_Observers) imports the NFPs, Time and GRM packages and the «modelLibrary» MARTE_Library; SAM and PAM both import GQAM.]


Annotated deployment diagram

- commRcvOvh and commTxOvh are host-specific costs of receiving and sending messages
- blockT describes a pure latency for the link
- resMult = 5 describes a symmetric multiprocessor with 5 processors

[Figure: deployment diagram. «commHost» nodes: internet {blockT = (100,us)} and lan {blockT = (10,us), capacity = (100,Mb/s)}. «execHost» nodes: ebHost {commRcvOverhead = (0.15,ms/KB), commTxOverhead = (0.1,ms/KB), resMult = 5} deploying artifact ebA «manifest» EBrowser; webServerHost {commRcvOverhead = (0.1,ms/KB), commTxOverhead = (0.2,ms/KB)} deploying webServerA «manifest» WebServer; dbHost {commRcvOverhead = (0.14,ms/KB), commTxOverhead = (0.07,ms/KB), resMult = 3} deploying databaseA «manifest» Database.]


Simple scenario

- the initial step is stereotyped with the workload (open), the execution demand and the request message size
- a swimlane or lifeline stereotyped «PaRunTInstance» references a runtime active instance; poolSize specifies its multiplicity

[Figure: annotated sequence diagram with lifelines eb: EBrowser, webServer: WebServer {poolSize = (webthreads = 80), instance = webserver} and database: Database {poolSize = (dbthreads = 5), instance = database}. Step 1 carries «PaWorkloadEvent» {open (interArrT = (exp(17,ms)))} and «PaStep» {hostDemand = (4.5,ms)}; step 2 is a «PaStep»/«PaCommStep» with {hostDemand = (12.4,ms), rep = (1.3,-,mean), msgSize = (2,KB)}; steps 3 and 4 are «PaCommStep»s with {msgSize = (75,KB)} and {msgSize = (50,KB)}.]
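A small sketch of how such annotations feed a performance model: the open-workload inter-arrival time gives the arrival rate, while hostDemand, msgSize and the hosts' communication overheads combine into CPU demands. The pairing of values with the web-server host below is an illustrative assumption, not a normative mapping.

# Illustrative conversion of MARTE annotation values into performance-model parameters.
MS = 1e-3  # seconds per millisecond

inter_arrival_ms = 17.0                       # from {open (interArrT = (exp(17,ms)))}
arrival_rate = 1.0 / (inter_arrival_ms * MS)  # requests per second

def step_cpu_demand(host_demand_ms, msg_size_kb=0.0, tx_overhead_ms_per_kb=0.0, rep=1.0):
    """CPU seconds a step adds to its host, including the cost of sending its message."""
    return rep * (host_demand_ms + msg_size_kb * tx_overhead_ms_per_kb) * MS

# Hypothetical pairing: steps 1 and 2 both execute on the web-server host (commTxOverhead = 0.1 ms/KB).
web_demand = (step_cpu_demand(4.5)
              + step_cpu_demand(12.4, msg_size_kb=2, tx_overhead_ms_per_kb=0.1, rep=1.3))

print(f"Arrival rate: {arrival_rate:.1f} requests/s")
print(f"Web-server CPU demand per request: {web_demand * 1000:.2f} ms")
print(f"Offered CPU load: {arrival_rate * web_demand:.2f}")  # > 1 means one processor cannot keep up

An offered load above 1 simply signals that a single processor would saturate; in the annotated model, the host multiplicity (resMult) determines how many processors share that load.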


Transformation Principles from SModels to PModels


UML model for performance analysis

For performance analysis, a UML model should contain:
- Key use cases, realized by representative scenarios
  - frequently executed, with performance constraints
- Resources used by each scenario
  - resource types: active or passive, physical or logical, hardware or software (examples: processor, disk, process, software server, lock, buffer)
  - quantitative resource demands for each scenario step (how much, how many times?)
- Workload intensity for each scenario
  - open workload: arrival rate of requests for the scenario
  - closed workload: number of simultaneous users
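These workload parameters connect directly to the analysis outputs through standard operational laws (general queueing results, not specific to these slides): an open workload with arrival rate λ yields throughput X = λ in steady state, while for a closed workload with N users and think time Z the interactive response time law gives

R = N/X - Z

so once the model predicts the throughput X, the response time experienced by the N users follows immediately.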


Direct UML to LQN Transformation: our first approach

Mapping principle:
- software and hardware resources map to service centres
- scenarios map to job flow from centre to centre

Generate the LQN model structure (tasks, devices and their connections) from the structural view:
- active software instances map to LQN tasks
- deployment nodes map to LQN devices

Generate the detailed LQN elements (entries, phases, activities and their parameters) from the behavioural view:
- identify communication patterns in the key scenarios due to architectural patterns (client/server, forwarding server chain, pipeline, blackboard, etc.)
- aggregate scenario steps according to each pattern and map them to entries, phases, etc.
- compute the LQN parameters from the resource demands of the scenario steps

A toy sketch of the structural part of this mapping follows.
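The sketch below only illustrates the structural half of the mapping; the input dictionary stands in for a parsed UML model, and all names and shapes are hypothetical.

# Toy structural UML-to-LQN mapping: active instances become tasks, deployment nodes become devices.
uml_model = {
    "active_instances": {"Client": "ProcC", "WebServer": "ProcS", "Database": "ProcDB"},  # instance -> node
    "deployment_nodes": ["ProcC", "ProcS", "ProcDB", "Disk1"],
}

def generate_lqn_structure(model):
    devices = {node: {"kind": "processor"} for node in model["deployment_nodes"]}
    tasks = {instance: {"host": node, "entries": []}   # entries are filled in later from the behavioural view
             for instance, node in model["active_instances"].items()}
    return {"devices": devices, "tasks": tasks}

structure = generate_lqn_structure(uml_model)
for task, spec in structure["tasks"].items():
    print(f"task {task} -> device {spec['host']}")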


Generating the LQN model structure

[Figure: (a) high-level architecture with ClientServer collaborations between Client (1..n), WebServer and Database; (b) deployment of Client instances (User1..UserN) on ProcC nodes connected via Modem and the Internet, with the WebServer on ProcS and the Database on ProcDB (with Disk1) connected by a LAN; and the generated LQN model structure, with one task per high-level software component allocated to its processor.]

- Software tasks are generated for the high-level software components, according to the architectural patterns used.
- Hardware tasks are generated for the devices in the deployment diagram.


Client Server Pattern

- Structure: the participants and their relationship
- Behaviour: synchronous communication style, in which the client sends the request and remains blocked until the server replies

[Figure: (a) the ClientServer collaboration between Client (1..n) and Server (1); (b) Client Server behaviour: the client requests service and waits for the reply, the server serves the request and replies, then the client continues its work while the server may complete the service (optional).]


Mapping the Client Server Pattern to LQN

[Figure: the Client/WebServer scenario mapped to an LQN. The client's "request service" and "continue work" steps become entry e1 [ph1] of the Client task on the Client CPU; the server's "serve request and reply" and optional "complete service" steps become phases 1 and 2 of entry e2 [ph1, ph2] of the Server task on the Server CPU.]

For each subset of scenario steps mapped to an LQN phase or activity, compute its execution time S = Σ_i r_i · s_i (summing over the n steps), where r_i is the number of repetitions and s_i the execution time of step i.
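For example, with hypothetical per-step repetitions and execution times, the phase demand is just this weighted sum:

# Aggregate scenario steps into one LQN phase demand: S = sum of r_i * s_i (placeholder values).
steps = [(1, 0.5), (3, 1.2), (1.3, 12.4)]   # (repetitions r_i, execution time s_i in ms) per step

phase_demand_ms = sum(r * s for r, s in steps)
print(f"Phase service demand: {phase_demand_ms:.2f} ms")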


Identify patterns in a scenario

[Figure: activity diagram with swimlanes UserInterface, ECommServ and DBMS for a browsing/shopping scenario (browse and select items {PArep = $r}, check valid item code, add item to query, sanitize query, add to invoice, generate page, display, log transaction), annotated with SPT demands such as {PAdemand = ('assm', 'mean', $md1, 'ms')}. The client/server pattern instances identified in the scenario are mapped to LQN entries: e1 [ph1] of the User Interface task, e2 [ph1] of the EComm Server task, and e3 [ph1, ph2] of the DBMS task; phase 1 and phase 2 markers indicate which steps execute before and after the reply.]


Transformation using a pivot language


Pivot languages

- A pivot language, also called a bridge or intermediate language, can be used as an intermediary for translation; it avoids the combinatorial explosion of translators for every combination of languages.
- Examples of pivot languages for performance analysis: Core Scenario Model (CSM), KLAPER, PMIF + S-PMIF, Palladio.
- Direct transformations from N source languages to M target languages require N*M transformations (every Li to every L'j).
- Using a pivot language Lp, only N+M transformations are needed (each Li to Lp, and Lp to each L'j); the pivot also narrows the semantic gap each transformation has to bridge.


Core Scenario Model

- CSM: a pivot domain-specific language used in the PUMA project at Carleton University (Performance from Unified Model Analysis)
- Semantically, CSM sits between the software and performance domains:
  - focused on scenarios and resources
  - performance data is intrinsic to CSM: the quantitative resource demands made by scenario steps, and the workload

[Figure: PUMA transformation chain. UML+SPT, UML+MARTE and UCM models are transformed into CSM, and from CSM into LQN, QN, Petri net or simulation models.]


CSM metamodel

Basic scenario elements, closely based on the SPT Performance Profile:
- steps, with their components and hosts (processor resources); a Step may be refined as a sub-scenario
- precedence among Steps: sequence, branch, merge, fork, join, loop
- resources and acquire/release operations on them, inferred for component-based resources (processes)

Four kinds of resources in CSM:
- ProcessingResource: a node in a deployment diagram
- ComponentResource: a process or active object (a component in a deployment diagram, a lifeline in a sequence diagram corresponding to a runtime component, or a swimlane in an activity diagram)
- LogicalResource: declared as a GRM resource
- extOp resource: an implied resource used to execute external operations


CSM example: Acquire Video scenario

[Figure: CSM scenario graph for the procOneImage scenario. Steps getBuffer/allocBuf (involving the Acquire Proc and Buffer Manager components) acquire a Buffer, a logical resource; getImage and passImage use the network external operation; after a fork, storeImage/store (StoreProc component) and writeImg (Database component, with the writeBlock external operation) store the image, and freeBuf/releaseBuf return the buffer via the Buffer Manager before the End. Resource acquire (ResAcq) and release (ResRel) nodes surround the steps, and the components run on the Applic CPU and DB CPU processing resources. A legend distinguishes start/end steps, acquire/release nodes, connectors, and Component, Processing and Logical resources.]

CSM resource contexts

- A resource context covers the subgraph of steps executed while holding a resource.
- Resource contexts are important for the transformation from CSM to a performance model.
- Processor resources are inferred from the deployment.
- Resource contexts need not be nested, e.g., the handover of the buffer between the acquiring and storing processes.

[Figure: the Acquire Video CSM annotated with resource contexts: the VideoController, GetImage, BufferManager, Buffer, StoreProc and Database contexts, each enclosing the steps between the corresponding resource acquire and release.]


PUMA approach

PUMA project: Performance from Unified Model Analysis.

[Figure: PUMA tool chain. A software model with performance annotations (Smodel) is transformed into the Core Scenario Model (CSM) by the S2C transformation; the CSM is converted to some performance model language (C2P) to obtain the performance model (Pmodel); the solution space is explored, and the performance results and design advice are fed back to improve the Smodel.]


Extending PUMA for SOA


PUMA4SOA

[Figure: PUMA4SOA process. A platform-independent SOA system model with performance annotations (PIM), its deployment diagram, and a middleware aspect model are composed into a platform-specific model (PSM); the PSM is transformed to CSM, and from CSM to a performance model; the solution space is explored and the performance results are fed back to the designers.]

- Layered software model: business processes invoking services
- PIM to PSM: the platform is modelled as aspects


Eligibility Referral System: Process Model

[Figure: process model of the Eligibility Referral System.]

Service Architecture Model

[Figure: service architecture model of the Eligibility Referral System.]

Service Behaviour Model

[Figure: service behaviour model; the join points for platform aspects are marked.]

Deployment of the primary model

[Figure: deployment of the primary model on the Admission, Insurance and Transferring nodes.]


Generic Aspect Model: Service Request Invocation (deployment view)

- The platform (the deployed middleware) offers its own services to the applications, e.g., service invocation and service response.
- Platform services are represented as a generic aspect model to be woven into the primary model.
- This is similar to the "completion" technique used in existing work to add platform details.

[Figure: deployment view of the generic aspect, with a Client node and a Provider node.]


Generic aspect model: Service Request Invocation (behavioral view)

[Figure: behavioral view of the service request invocation aspect: marshaling to XML, SOAP message transmission, and unmarshaling from XML.]

Generic aspect model: Service Response (behavioral view)

[Figure: behavioral view of the service response aspect: marshaling to XML, SOAP message transmission, and unmarshaling from XML.]


Binding generic to concrete resources

- Generic names (parameters) are bound to concrete names corresponding to the context of the join point.
- Sometimes new resources are added to the primary model.


Binding performance annotation variables

- The annotation variables allowed in MARTE are used as generic performance annotations.
- They are bound to concrete, reusable platform-specific annotations.


PSM: scenario after composition

[Figure: the platform-specific scenario after weaving, showing the composed service invocation aspect and the composed service response aspect.]


CSM example for Eligibility Referral

- CSM models are automatically generated from UML+SPT or UML+MARTE models, i.e., from the behaviour diagrams (sequence or activity) and the deployment diagrams.
- Aspect weaving can be implemented at the UML level (as shown), the CSM level, or the LQN level.


Aspect composition in CSM

[Figure: the annotated UML primary model and the annotated UML aspect model are each extracted to CSM (PUMA tools); the primary and aspect CSMs are composed into a composed CSM, which is converted (PUMA tools) into the performance model (LQN).]

Why compose in CSM?
- UML has a complex metamodel; current tools make it awkward to add and manipulate performance annotations, and behaviour has different representations (e.g., activities and interactions).
- Working at the CSM level makes sense for performance analysis: behaviour and resources are in the same model, and there is a unified view of behaviour coming from the different UML representations.


Aspect composition at LQN level

a) Primary model
b) Composed aspects based on forwarding
c) Composed aspects based on rendezvous
d) Aggregating middleware tasks in the composed model

[Figure: the four LQN variants listed above.]


Eligibility Referral System: LQN Model

[Figure: LQN model of the Eligibility Referral System.]


Finer service granularity

[Plots: (a) response time vs. number of users and (b) throughput vs. number of users for configurations A to E of the finer-granularity design.]

- A: the base case; the multiplicity of all tasks and hardware devices is 1, except for the number of users. The Transferring processor is the system bottleneck.
- B: resolve the bottleneck by increasing the multiplicity of the bottleneck processor node to 4 processors. Only a slight improvement, because the next bottleneck, the middleware task MW_NA, kicks in.
- C: the software bottleneck is resolved by multi-threading MW_NA.
- D: increase the number of disk units for Disk1 to 2 and add more threads to the next software bottleneck tasks, dm1 and MW-DM1. The throughput goes up by 24% with respect to case C; the bottleneck moves to the DM1 processor.
- E: increasing the number of DM1 processors to 2 has a considerable effect.


Coarser service granularity

[Plots: (c) response time vs. number of users and (d) throughput vs. number of users for configurations A to D of the coarser-granularity design.]

- A: the base case. The software task dm1 is the initial bottleneck.
- B: the software bottleneck is resolved by multi-threading dm1. The response time is reduced slightly and the bottleneck moves to Disk1.
- C: increasing the number of disk units for Disk1 to 2 has a considerable effect; the maximum throughput goes up by 60% with respect to case B. The bottleneck moves to the Transferring processor.
- D: increasing the multiplicity of the Transferring processor to 2 processors and adding more threads to the next software bottleneck task, MW-NA; the throughput grows by 11%.


Coarser versus finer service granularity

[Plots: (e) response time vs. number of users and (f) throughput vs. number of users, comparing the finer and coarser service granularity alternatives.]

The comparison is between the D cases of the two alternatives. The compared configurations are similar in the number of processors, disks and threads, except that the coarser-granularity alternative performs fewer service invocations through the web service middleware.


Analyzing Performance Effects of SOA Patterns


Metamodel for change propagation

[Figure: change-propagation metamodel. An SModel maps one-to-one to a PModel, and SElements map to PElements. The PModel has Problems, each solvedBy SOA Patterns; a pattern's Solution Description determines Application Rules, which (with their parameters) produce SModel Changes applied to the affected SElements and corresponding PModel Changes applied to the affected PElements. PModel change types include Task Replication, Multiplicity Change, Redeployment, ShrinkExec, MoreAsynch, Partitioning and Batching.]


Approach for incremental change propagation from SModel to PModel based on SOA patterns

[Figure: process flow. The SModel is transformed into the PModel by a model transformation; performance analysis (and other non-functional property analyses) of the PModel produces results used to identify the problem and choose a SOA pattern; the pattern's application rules are extracted, the affected SElements and PElements are identified, the changes are determined, and the PModel changes are applied incrementally to the PModel.]


Examples of SOA Patterns

"Functional Decomposition" SOA pattern
- Application instruction: depending on the nature of the large problem, a service-oriented process can be created to cleanly deconstruct it into smaller problems.
- Application rule: Condition: there is a large service with multiple sub-functions in the system. Actions: split the service into smaller services.

"Asynchronous Queuing" SOA pattern
- Application instruction: the portion of a service that can be processed without locking the requestor should be identified and postponed until after an intermediate response message to the requestor.
- Application rule: Condition: there is a service with a long processing time in the system. Actions: provide the requestor with an intermediate response and postpone the rest of the processing until after the intermediate response is sent.


SModel example: Browsing sequence diagram

[Figure: sequence diagram with lifelines User, Shopping and Browsing Services, Catalogue Service and DB Server. The user browses the catalogue; the Catalogue Service filters products and creates the product catalogue from the DB Server's products, formats it into a catalogue and sends it back; the user receives the product catalogue, checks out the shopping cart and receives an order confirmation.]


SModel example: Shopping sequence diagram

[Figure: sequence diagram with lifelines User, Shopping and Browsing Services, Order Processing Service and Payment Processing. The user browses the catalogue (catalogue returned), checks out the shopping cart and places the order; the Order Processing Service validates the payment information, the payment is processed and confirmed, and the user receives the order confirmation once the order is placed.]


LQN model before and after the first pattern User [z=30s]

User [z=30s]

users usersp

1 Entrynet [pure delay 1 ms]

users usersp

1 Entrynet [pure delay 1 ms]

Network

Network Networkp

Networkp

0.5

0.5

EntryServ1 [s=0.2ms]

1.5

0.5

3

1.5

EntryPay1 [s=0.8ms]

EntryPay2 [s=0.4ms]

EntryServ2 [s=0.5ms]

Shopping &Browsing Service Shop&B rowP

2 Order Processing Service

EntryServ1 [s=0.2ms]

EntryCT [s=1s]

Product Service 1.5

0.5 1.5

EntryVisa [s=100ms]

Entry MasterCard [s=100 ms]

Payment Processing

CTp

Orderp

EntryDCT [s=50ms]

EntryVisa [s=100ms]

0.5

Shopping Service

3

1.5

EntryPay1 [s=0.8ms]

EntryPay2 [s=0.4ms]

EntryServ2 [s=0.5ms]

ShopP

Order Processing Service

Browsing Service

BrowP

2 EntryCT [s=1s]

Product Service

0.5 Entry MasterCard [s=100 ms]

1.5 Payment Processing

EntryDCT [s=50ms]

DB

CTp

Orderp

DB

Payp

Payp DBp

DBp


Results

- Assess the performance effectiveness of applying SOA patterns.
- Three cases are compared: 1) before any pattern; 2) after the first pattern; 3) after the second pattern.
- First pattern, Functional Decomposition:
  - aims to manage the software complexity
  - large performance improvement, because it affects the software bottleneck task
- Second pattern, Asynchronous Queuing:
  - aims to improve performance
  - practically no performance effect, because it is applied to a task that is not performance-critical


Conclusions

Integrating performance analysis with the model-driven development of service-oriented systems has many potential benefits:
- For service consumers: how to choose the "best" services available.
- For service providers: how to design and configure their systems to optimize the use of resources and meet performance requirements.
- For software developers: analyze design and configuration alternatives, evaluate trade-offs.
- For performance analysts: automate the generation of PModels from SModels to keep them in sync, and reuse platform performance annotations.

A lot more remains to be done, theoretically and practically:
- merging performance modeling and measurement: use runtime monitoring data to build better performance models, and use performance models to support runtime changes (autonomic systems)
- applying variability modeling to service-oriented systems: manage runtime changes, adapt to context
- provide ideas and background for building better tools