ANNEX B

Terms of Reference: Extension of the Web Infrastructure for Intranet Services

RFQ 2009-0267/AWADE

Table of Contents

1. INTRODUCTION
2. Background
3. Current Configuration and Security Approach
   3.1 Hardware
   3.2 Rack allocation – 2.L11
   3.3 Rack allocation – 1.R8
   3.4 Ethernet wiring
   3.5 Cluster interconnection
   3.6 FC wiring
4. Current Drawback and proposed solution
   4.1 Hardware
   4.2 Rack allocation – 1.R8
   4.3 Ethernet wiring
   4.4 Additional Cluster interconnection
   4.5 FC wiring
5. Service Level Agreement
   5.1 Background
   5.2 Features
      5.2.1 Users
      5.2.2 User Actions
      5.2.3 Document Flow
      5.2.4 Layout and Repository
      5.2.5 Reporting
6. Deliverables
   6.1 Installation and Configuration
   6.2 SLA


1. INTRODUCTION

The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (hereinafter referred to as the “Commission”) is the international organization setting up the global verification system foreseen under the Comprehensive Nuclear-Test-Ban Treaty (hereinafter referred to as the “CTBT”), the Treaty banning any nuclear weapon test explosion or any other nuclear explosion. The Treaty provides for a global verification regime, including a network of 321 stations worldwide, a communications system, an International Data Centre and on-site inspections to monitor compliance. The headquarters of the Preparatory Commission is at the Vienna International Centre in Vienna, Austria.

One fundamental task of the Commission’s International Data Centre is to provide States Parties with equal, open, timely and convenient access to agreed products and services in support of their national CTBT verification requirements. An integral component of the distribution mechanism is the use of web technology.

The purpose of this document is to request proposals for the intranet expansion of the Web Infrastructure and the implementation of an SLA for the Web Infrastructure.


2. Background

A Web Infrastructure has been deployed by the PTS to host web-related services for different user communities:

- Internet Users: users in the general public domain, for services available via the internet (e.g. http://www.ctbto.org, the public website of the PTS)
- Intranet Users: users within the internal PTS LAN, mostly staff members, GTAs and consultants (e.g. http://Intranet.ctbto.org)
- Extranet Users: known external users, such as PMO members or Member State participants, for services that are not available to the general public (e.g. http://ecs.ctbto.org)

References: the current Web Infrastructure System Design Specification is available upon request.


3. Current Configuration and Security Approach

There are four physical servers in the current Web Infrastructure: two for all production services and two for test and development services. Services for each category of user are hosted on separate virtual machines with the following configuration:


3.1 Hardware

The hardware used by the Web Infrastructure is located in two separate racks: 2.L11 and 1.R8. The former is used for the servers and load balancers, while the latter is used for the storage and the management console.

The hardware in rack 2.L11 consists of:

- Two load balancers in HA configuration (model BIG-IP LTM 6400)
- Four servers (model Sun Fire X4600-M2)
- Two Gigabit Ethernet switches in HA configuration providing TCP/IP interconnectivity

The hardware in rack 1.R8 consists of:

- SAN storage (model Clariion CX-20c)
- Two Fibre Channel switches (EMC Connectrix DS-440M)
- One management console

The following table contains the detailed hardware specification:

Devices: wilb01, wilb02 (Location: 2.L11) – Load balancers
  Manufacturer: F5
  Model: BIG-IP LTM 6400-RS
  Memory: 4GB
  Modules: Local Traffic Manager, Web Accelerator, Performance Pack Plus
  Operating system: TMOS 9.4.4

Devices: wias01, wias02, wias03, wias04 (Location: 2.L11) – Servers
  Manufacturer: Sun Microsystems
  Model: Sun Fire X4600-M2
  Memory: 32GB
  Disk: 4x146GB
  Processors: 4x AMD Opteron Model 8222 2.8GHz
  HBA: 2x QLogic 2460-CK
  Ethernet: 1x ILOM, 4x onboard 1Gbit Ethernet, 2x dual-port 1Gb Ethernet card

Devices: wisw01, wisw02 (Location: 2.L11) – Gigabit Ethernet switches
  Manufacturer: Cisco
  Model: WS-C2960-24TT-L
  IOS: Cisco IOS 12.2(46)SE
  Ports: 24x 100BASE-TX, 2x 1000BASE-T

Device: wist (Location: 1.R8) – SAN storage
  Manufacturer: EMC
  Model: CLARiiON CX-20F
  Storage space: 12TB (approx.)
  Two storage processors

Devices: wifcsw01, wifcsw02 (Location: 1.R8) – Fibre Channel switches
  Manufacturer: EMC
  Model: Connectrix DS-440M
  Ports: 16x LC/LC

Device: wimgmt (Location: 1.R8) – Management Station
  HP TFT7600 Rackmount Keyboard and Monitor (TFT7600 RKM)
  Operating system: Windows XP Embedded

Table 1 Detailed hardware description


3.2 Rack allocation – 2.L11

Figure 1 Rack front side

Figure 2 Rack back side


3.3 Rack allocation – 1.R8

Figure 3 Rack front side

Figure 4 Rack back side


3.4 Ethernet wiring

The Ethernet wiring follows the concept of the following VLANs:

- WI-ADMIN VLAN connects the management ports and provides access to the host operating system on the servers.
- WI-PUBLIC contains the public IP addresses that belong to DMZ 74.
- WI-TEST VLAN is the private network that carries traffic between the load balancers and the virtual machines on the test and development servers, for services deployed for internet and extranet users only.
- WI-APPS VLAN is the private network that carries traffic between the load balancers and the virtual machines on the production servers, for services deployed for internet and extranet users only.
- WI-INTRAPPS VLAN is the private network that carries traffic between the load balancers and the virtual machines on the production servers, for services deployed for intranet users only.
- WI-INTRATEST VLAN is the private network that carries traffic between the load balancers and the virtual machines on the test servers, for services deployed for intranet users only.

The following rules have been used:

- All storage equipment (located in rack 1.R8) is connected to the WI-ADMIN VLAN through the switches in 2.L11.
- The wiasNN servers have their NET0 and NET1 interfaces bonded to the WI-ADMIN VLAN. These interfaces are used to access the host operating system only.
- The production servers have their NET2 and NET3 interfaces bonded to the WI-APPS and WI-INTRAPPS VLANs. These interfaces are used to access the production virtual machines only.
- The test/development servers have their NET2 and NET3 interfaces bonded to the WI-TEST and WI-INTRATEST VLANs. These interfaces are used to access the test and development virtual machines only.
- WI-ADMIN does not require high availability, except for the production servers.

These rules are illustrated as a simple check in the sketch below.
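As an illustration only (this is not part of the existing system nor a required deliverable), the rules above can be expressed as data and checked mechanically. The host names and VLAN names below are taken from this section; the check logic itself is a hypothetical sketch.

```python
# A minimal sketch (assumptions noted above): the wiring rules of Section 3.4
# encoded as data, plus a check of one hypothetical wiring record.

EXPECTED_APP_VLANS = {
    "production": {"WI-APPS", "WI-INTRAPPS"},   # wias01, wias02
    "test":       {"WI-TEST", "WI-INTRATEST"},  # wias03, wias04
}

SERVER_ROLES = {"wias01": "production", "wias02": "production",
                "wias03": "test",       "wias04": "test"}


def check_server(host, wiring):
    """Return rule violations for one wiasNN server.

    `wiring` maps an interface name to the set of VLANs carried on it,
    e.g. {"net0": {"WI-ADMIN"}, ..., "net2": {"WI-APPS", "WI-INTRAPPS"}}.
    """
    problems = []
    for iface in ("net0", "net1"):          # host OS access only
        if wiring.get(iface) != {"WI-ADMIN"}:
            problems.append("%s %s must carry WI-ADMIN only" % (host, iface))
    expected = EXPECTED_APP_VLANS[SERVER_ROLES[host]]
    for iface in ("net2", "net3"):          # virtual machine traffic only
        if wiring.get(iface) != expected:
            problems.append("%s %s must carry %s" % (host, iface, sorted(expected)))
    return problems


if __name__ == "__main__":
    # Example record for wias01, consistent with Table 2.
    wias01 = {"net0": {"WI-ADMIN"}, "net1": {"WI-ADMIN"},
              "net2": {"WI-APPS", "WI-INTRAPPS"},
              "net3": {"WI-APPS", "WI-INTRAPPS"}}
    print(check_server("wias01", wias01) or "wias01 wiring matches the rules")
```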


Figure 5 Ethernet wiring in 2.L11

Figure 6 Ethernet wiring between 1.R8 and 2.L11


The cable colours displayed in Figure 5 depict the actual cable colours used in the system. These may change in the future if some of the cables require replacement (e.g. a faulty cable).

Rack | Hostname | Interface (Label) | Connection Type (see “Legend” for valid types) | Socket or Switch Port or CP Outlet
2L11 | wilb01 | mgmt | WI-ADMIN | wisw01-Fa0/13
2L11 | wilb01 | 1.1 | WI-PUBLIC | CP3-1A09
2L11 | wilb01 | 1.2 | WI-PUBLIC | CP3-1B21
2L11 | wilb01 | 1.5 | WI-APPS / WI-INTRAPPS | wisw01-Fa0/1
2L11 | wilb01 | 1.6 | WI-APPS / WI-INTRAPPS | wisw02-Fa0/1
2L11 | wilb01 | 1.11 | WI-TEST / WI-INTRATEST | wisw01-Fa0/11
2L11 | wilb01 | 1.12 | WI-TEST / WI-INTRATEST | wisw02-Fa0/11
2L11 | wilb02 | mgmt | WI-ADMIN | wisw02-Fa0/13
2L11 | wilb02 | 1.1 | WI-PUBLIC | CP3-1A10
2L11 | wilb02 | 1.2 | WI-PUBLIC | CP3-1B22
2L11 | wilb02 | 1.5 | WI-APPS / WI-INTRAPPS | wisw01-Fa0/2
2L11 | wilb02 | 1.6 | WI-APPS / WI-INTRAPPS | wisw02-Fa0/2
2L11 | wilb02 | 1.11 | WI-TEST / WI-INTRATEST | wisw01-Fa0/12
2L11 | wilb02 | 1.12 | WI-TEST / WI-INTRATEST | wisw02-Fa0/12
2L11 | wilb02 | 1.15 | TRUNK | wilb01-1.15
2L11 | wilb02 | 1.16 | TRUNK | wilb01-1.15
2L11 | wias01 | netmgt | WI-ADMIN | wisw01-Fa0/14
2L11 | wias01 | net0 | WI-ADMIN | wisw01-Fa0/15
2L11 | wias01 | net1 | WI-ADMIN | wisw02-Fa0/15
2L11 | wias01 | net2 | WI-APPS / WI-INTRAPPS | wisw01-Fa0/3
2L11 | wias01 | net3 | WI-APPS / WI-INTRAPPS | wisw02-Fa0/3
2L11 | wias02 | netmgt | WI-ADMIN | wisw02-Fa0/14
2L11 | wias02 | net0 | WI-ADMIN | wisw01-Fa0/16
2L11 | wias02 | net1 | WI-ADMIN | wisw02-Fa0/16
2L11 | wias02 | net2 | WI-APPS / WI-INTRAPPS | wisw01-Fa0/4
2L11 | wias02 | net3 | WI-APPS / WI-INTRAPPS | wisw02-Fa0/4
2L11 | wias03 | netmgt | WI-ADMIN | wisw01-Fa0/17
2L11 | wias03 | net0 | WI-ADMIN | wisw01-Fa0/18
2L11 | wias03 | net1 | WI-ADMIN | wisw02-Fa0/18
2L11 | wias03 | net2 | WI-TEST / WI-INTRATEST | wisw01-Fa0/9
2L11 | wias03 | net3 | WI-TEST / WI-INTRATEST | wisw02-Fa0/9
2L11 | wias04 | netmgt | WI-ADMIN | wisw02-Fa0/17
2L11 | wias04 | net0 | WI-ADMIN | wisw01-Fa0/19
2L11 | wias04 | net1 | WI-ADMIN | wisw02-Fa0/19
2L11 | wias04 | net2 | WI-TEST / WI-INTRATEST | wisw01-Fa0/10
2L11 | wias04 | net3 | WI-TEST / WI-INTRATEST | wisw02-Fa0/10
2L11 | wisw01 | Fa0/23 | TRUNK | CP3-1A11
2L11 | wisw01 | Fa0/24 | TRUNK | CP3-1A12
2L11 | wisw02 | Fa0/23 | TRUNK | CP3-1B23
2L11 | wisw02 | Fa0/24 | TRUNK | CP3-1B24
2L11 | wisw02 | Gi0/1 | TRUNK | wisw01-Gi0/1
2L11 | wisw02 | Gi0/1 | TRUNK | wisw01-Gi0/2
1R8 | wist-spa | mgmt | WI-ADMIN | wisw01-Fa0/20
1R8 | wist-spb | mgmt | WI-ADMIN | wisw02-Fa0/20
1R8 | wifcsw01 | mgmt | WI-ADMIN | wisw01-Fa0/21
1R8 | wifcsw02 | mgmt | WI-ADMIN | wisw02-Fa0/21
1R8 | wimgmt | nic | WI-ADMIN | wisw01-Fa0/22

Table 2 Ethernet interconnection details

All patch cables are connected and labelled in accordance with the naming scheme used in the CC:

{Hostname}-{Interface} {Patchpanel Socket}
e.g. wisw01-fa0/23 CP3-1A11

or, in the case of local interconnects/links:

{Hostname}-{Interface} {switchname-port}
e.g. wistspa-mgmt wisw01-fa0/20
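As a small illustration (not existing CC tooling), labels following this scheme can be generated from the interconnection data; the function name below is purely hypothetical.

```python
# Minimal sketch: build a cable label "{Hostname}-{Interface} {far end}",
# where the far end is either a patch panel socket or a switch port.

def cable_label(hostname, interface, far_end):
    # Host and interface names are written in lower case, as in the examples above.
    return "%s-%s %s" % (hostname.lower(), interface.lower(), far_end)

print(cable_label("wisw01", "Fa0/23", "CP3-1A11"))      # -> wisw01-fa0/23 CP3-1A11
print(cable_label("wistspa", "mgmt", "wisw01-fa0/20"))  # -> wistspa-mgmt wisw01-fa0/20
```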

3.5 Cluster interconnection

The nodes wias01 and wias02 form a cluster, as well as nodes wias03 and wias04. Therefore it is necessary to have a dedicated heartbeat interconnection between the cluster members. This interconnection has to be reliable, highly available and as fast as possible.

Reliability is achieved by direct interconnection using Ethernet cross-over cables. In this way the communication between the nodes does not depend on any other network component, reducing the possibility of failed communication.


High availability of the connection shall be achieved by using multiple network adapters and multiple cables between the cluster nodes. Speed shall be achieved by “trunking” the physical interconnections into a single bonded interface; the network throughput between the cluster nodes is thus expected to scale to 4 Gbit per second.

The heartbeat interconnection shall be used for communication between the cluster nodes and for possible failover of virtual machines between the cluster nodes. A failover needs to transfer the memory image from one node to the other during migration. Since the memory footprint is in the order of gigabytes, such a transfer should be as fast as possible.

Figure 7 Cluster Ethernet interconnection
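The following back-of-the-envelope sketch illustrates why the bonded 4 Gbit/s interconnect matters for migration time. The memory sizes and the 90% efficiency factor are hypothetical assumptions, not measurements of the system.

```python
# Rough, illustrative arithmetic only: time to transfer a VM memory image over
# a single 1 Gbit/s link versus the 4 Gbit/s bonded heartbeat interconnect.

def transfer_seconds(memory_gib, link_gbit_per_s, efficiency=0.9):
    """Approximate transfer time; `efficiency` is an assumed protocol-overhead factor."""
    bits = memory_gib * 8 * 1024**3              # memory image size in bits
    return bits / (link_gbit_per_s * 1e9 * efficiency)

for memory_gib in (4, 8, 16):                    # hypothetical VM memory footprints
    single = transfer_seconds(memory_gib, 1)
    bonded = transfer_seconds(memory_gib, 4)
    print("%2d GiB image: ~%4.0f s at 1 Gbit/s, ~%4.0f s at 4 Gbit/s"
          % (memory_gib, single, bonded))
```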


3.6 FC wiring

The Fibre Channel wiring also supports the HA configuration by providing fully redundant paths between each node and the storage. The following concepts were used:

- Each server contains two HBAs (host bus adapters).
- The HBAs are connected to separate PCI Express buses (pcie3 slot for the first HBA, pcie6 slot for the second HBA).
- Each HBA in a single server is connected to a different FC switch.
- The switches have redundant interconnections to the storage unit.

A sketch that checks this redundancy against the interconnection data in Table 3 is given after the table below.

Figure 8 FC wiring

The following table lists details of the FC interconnections:

Rack | Hostname | Interface (Label) | Upstream connection
2L11 | wias01 | pcie3 | wifcsw01-p2
2L11 | wias01 | pcie6 | wifcsw02-p2
2L11 | wias02 | pcie3 | wifcsw01-p3
2L11 | wias02 | pcie6 | wifcsw02-p3
2L11 | wias03 | pcie3 | wifcsw01-p4
2L11 | wias03 | pcie6 | wifcsw02-p4
2L11 | wias04 | pcie3 | wifcsw01-p5
2L11 | wias04 | pcie6 | wifcsw01-p5
1R8 | wifcsw01 | p0 | wifstspa-fc0
1R8 | wifcsw01 | p1 | wifstspb-fc1
1R8 | wifcsw02 | p0 | wifstspa-fc1
1R8 | wifcsw02 | p1 | wifstspb-fc0

Table 3 FC Interconnection details

All patch cables are connected and labelled in accordance with the following naming scheme:

{Hostname}-{Interface} {switchname-port}
e.g. wias01-pcie3 wistfc01-p3
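The redundancy concept (each server reaching both FC switches) can be verified mechanically. The sketch below is illustrative only: the server connections are transcribed from Table 3 as printed, and the check itself is not part of the existing system.

```python
# Illustrative check: every server should have FC paths to both switches
# (wifcsw01 and wifcsw02). Server rows transcribed from Table 3.

FC_CONNECTIONS = [
    ("wias01", "pcie3", "wifcsw01"), ("wias01", "pcie6", "wifcsw02"),
    ("wias02", "pcie3", "wifcsw01"), ("wias02", "pcie6", "wifcsw02"),
    ("wias03", "pcie3", "wifcsw01"), ("wias03", "pcie6", "wifcsw02"),
    ("wias04", "pcie3", "wifcsw01"), ("wias04", "pcie6", "wifcsw01"),
]

REQUIRED_SWITCHES = {"wifcsw01", "wifcsw02"}


def hosts_missing_redundancy(connections):
    """Return, per host, the FC switches it does not reach."""
    reached = {}
    for host, _hba, switch in connections:
        reached.setdefault(host, set()).add(switch)
    return {h: REQUIRED_SWITCHES - s for h, s in reached.items()
            if s != REQUIRED_SWITCHES}


if __name__ == "__main__":
    # With the data exactly as printed in Table 3, wias04 is reported here,
    # because both of its entries list wifcsw01.
    print(hosts_missing_redundancy(FC_CONNECTIONS))
```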


4. Current Drawback and proposed solution

All services, irrespective of the targeted user community, share the same physical machines and physical network interfaces. This poses a security risk: in the unlikely event that an internet-based attacker gains access to DOM 0 (the host machine), it would be possible to compromise the security of internal services.

In order to address this main security weakness of the current deployment, an “internal” web infrastructure will be deployed. This is essentially a replication of the current DMZ deployment, physically connected to the internal LANs rather than to the current web infrastructure switches. Services and virtual machines existing on WI-INTRAPPS and WI-INTRATEST will be moved to this environment. Two HP DL785 servers from an earlier project are being replaced there by two smaller HP ProLiant DL385 servers, freeing the DL785 servers for this purpose.

The new design will have the following components and additional configuration.


4.1 Hardware

The hardware in the rack 1.R8 will now consist of:

- SAN storage (model Clariion CX-20c)
- Two Fibre Channel switches (EMC Connectrix DS-440M)
- One management console
- Two intranet servers (model HP ProLiant DL785 G5)

The following table shows the detailed specification for the new hardware:

Devices: wias05, wias06 (Location: 1.R8) – Servers
  Manufacturer: Hewlett-Packard
  Model: ProLiant DL785 G5
  Memory: 128GB
  Disk: 8x146GB
  Processors: 8x AMD Opteron Model 8354 2.2GHz
  HBA: HP 82Q 8Gb Dual Port PCI-e FC HBA
  Ethernet: 1x iLO2, 2x onboard 1Gbit Ethernet, 4x dual-port 1Gb Ethernet card

Table 4 Detailed hardware description


4.2 Rack allocation – 1.R8

Figure 9 Rack 1R8 front and back side


4.3 Ethernet wiring

Figure 10 Ethernet wiring between 1.R8 and 2.L11

Rack | Hostname | Interface (Label) | Connection Type (see “Legend” for valid types) | Socket or Switch Port or CP Outlet
1R8 | wias05 | netmgmt | WI-INTRAAPPS | wisw01-Fa0/5
1R8 | wias05 | net0 | WI-INTRAAPPS | wisw01-Fa0/6
1R8 | wias05 | net1 | WI-INTRAAPPS | wisw02-Fa0/7
1R8 | wias06 | netmgmt | WI-INTRAAPPS | wisw02-Fa0/5
1R8 | wias06 | net0 | WI-INTRAAPPS | wisw01-Fa0/7
1R8 | wias06 | net1 | WI-INTRAAPPS | wisw02-Fa0/6

Table 5 Ethernet interconnection details

4.4 Additional Cluster interconnection

- WI-INTRAAPPS cluster, consisting of wias05 and wias06

4.5 FC wiring

The Fibre Channel wiring also supports the HA configuration by providing fully redundant paths between each node and the storage. The following concepts were used:

- Each of the servers wias01 to wias04 contains two HBAs (host bus adapters), while wias05 and wias06 each have one dual-port HBA.
- The HBAs are connected to separate PCI Express buses (pcie3 slot for the first HBA, pcie6 slot for the second HBA).
- Each port in a single server is connected to a different FC switch.
- The switches have redundant interconnections to the storage unit.

Figure 11 FC wiring

The following table lists details of the FC interconnections:

Rack | Hostname | Interface (Label) | Upstream connection
1R8 | wias05 | p0 | wifcsw01-p6
1R8 | wias05 | p1 | wifcsw02-p6
1R8 | wias06 | p0 | wifcsw01-p7
1R8 | wias06 | p1 | wifcsw02-p7

Table 6 FC Interconnection details


5. Service Level Agreement

5.1 Background

For the Web Infrastructure, the SLA between system administrators and application owners is currently implemented through a manual process using an MS Word template (a sample is outlined in Annex 1). The PTS would like to implement this as a web form on the Web Infrastructure, using Alfresco ECM as the underlying data repository.

5.2 Features

5.2.1 Users

Roles:
- PTS Infrastructure Administrator
- Section Chief
- PTS Infrastructure Supervisor
- Infrastructure Manager (currently OptimIT for the Web Infrastructure)

5.2.2 User Actions

- Web form to be filled in by the PTS user; content as specified in Annex 1
- Web form to be approved or rejected by the subsequent roles, with comment fields

5.2.3 Document Flow

Email notification and approval.
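A minimal sketch of how the approval flow in 5.2.2 and 5.2.3 could be modelled. The role names are those listed in 5.2.1; the approval order, state names, class and function names and the email hook are illustrative assumptions, not a specification of the Alfresco implementation.

```python
# Illustrative model of the SLA web-form approval chain (assumptions as noted above).

APPROVAL_CHAIN = [                      # roles from 5.2.1; order is assumed
    "PTS Infrastructure Administrator",
    "Section Chief",
    "PTS Infrastructure Supervisor",
    "Infrastructure Manager",
]


def notify(recipient, request):
    # Placeholder for the email notification mentioned in 5.2.3.
    print("notify %s: SLA for %s is %s" % (recipient, request.system_name, request.status))


class SlaRequest:
    def __init__(self, system_name, form_data):
        self.system_name = system_name
        self.form_data = form_data       # content as specified in Annex 1
        self.step = 0                    # index into APPROVAL_CHAIN
        self.comments = []
        self.status = "PENDING"

    def current_approver(self):
        return APPROVAL_CHAIN[self.step] if self.status == "PENDING" else None

    def decide(self, role, approved, comment=""):
        """Record one approval/rejection with a comment, then notify the next party."""
        if role != self.current_approver():
            raise ValueError("not this role's turn: %s" % role)
        self.comments.append((role, "approved" if approved else "rejected", comment))
        if not approved:
            self.status = "REJECTED"
        elif self.step == len(APPROVAL_CHAIN) - 1:
            self.status = "APPROVED"
        else:
            self.step += 1
        notify(self.current_approver() or "requestor", self)


if __name__ == "__main__":
    req = SlaRequest("PTS SOH system", {"owner": "IMS/ED"})
    req.decide("PTS Infrastructure Administrator", True, "form complete")
    req.decide("Section Chief", True)
```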

5.2.4 Layout and Repository

For each system, a new sub-space should be created in Alfresco, where all documents related to the system will be stored.
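As an illustration of this layout only (not a prescription of the Alfresco implementation), a sub-space per system could be created through Alfresco's CMIS interface. The sketch below assumes the Apache Chemistry cmislib client; the service URL, credentials, parent path and file names are placeholders and would need to be verified against the deployed Alfresco version.

```python
# Illustrative sketch: create a per-system sub-space in Alfresco via CMIS and
# file an SLA document in it. Assumes the "cmislib" client; all URLs, credentials
# and paths are placeholders, not the actual PTS configuration.
from cmislib import CmisClient  # import path may differ between cmislib versions

CMIS_URL = "http://alfresco.example.org/alfresco/service/cmis"  # placeholder endpoint
client = CmisClient(CMIS_URL, "admin", "admin")                 # placeholder credentials
repo = client.defaultRepository

# Parent space holding one sub-space per system (assumed layout).
sla_root = repo.getObjectByPath("/SLA")


def create_system_space(system_name, sla_file_path):
    """Create the sub-space for one system and store its SLA document there."""
    space = sla_root.createFolder(system_name)
    with open(sla_file_path, "rb") as f:
        space.createDocument("SLA-%s.pdf" % system_name, contentFile=f)
    return space


create_system_space("PTS SOH system", "sla_soh.pdf")
```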

5.2.5 Reporting

Reports should be available for:

- the current implementation status of each system
- a summary of the SLA for selected systems


6. Deliverables

6.1 Installation and Configuration

1. Physical connection and configuration of WIAS05 and WIAS06 with all network interfaces as specified above.
2. Installation and configuration of RHEL 5.3:
   (a) High Availability configuration and RHEL cluster implementation
   (b) Configuration of the XEN hypervisor with DOM0 and DOMU interfaces in accordance with the existing Web Infrastructure design
   (c) Network with the following VLANs: WI-INTRAAPPS and WI-INTRATEST
   (d) Storage and Fibre Channel connection and configuration
   (e) Common services: DNS, NTP and mail services (backend server credentials shall be provided by the PTS)
3. Migration of existing services on WI-INTRAAPPS to the new hosts.
4. Integration into the existing backup scheme:
   (a) Full image backup
   (b) R-Sync backup of all virtual machines (one possible shape of this is sketched after this list)
5. Installation and configuration of the following additional software on the production and test environments:
   (a) Liferay Portal
   (b) Alfresco ECMS
   (c) Geronimo Application Server
6. Documentation: full documentation and integration into the existing SDD.
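For deliverable 4(b), an R-Sync based backup of the virtual machine images might look like the following sketch. The paths, host name and image directory are placeholders (the directory shown is the usual Xen default on RHEL); the actual scheme is defined by the existing PTS backup infrastructure.

```python
# Illustrative sketch only: rsync-based backup of virtual machine disk images,
# as one possible shape of deliverable 4(b). All paths and hosts are placeholders.
import subprocess
from datetime import date

VM_IMAGE_DIR = "/var/lib/xen/images"          # assumed Xen guest image location
BACKUP_TARGET = "backupsrv:/backup/webinfra"  # placeholder backup destination


def rsync_vm_images():
    """Copy the VM image directory to a dated directory on the backup host."""
    dest = "%s/%s/" % (BACKUP_TARGET, date.today().isoformat())
    subprocess.check_call(["rsync", "-a", "--delete", VM_IMAGE_DIR + "/", dest])


if __name__ == "__main__":
    rsync_vm_images()
```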

6.2 SLA

1. Fully configured Alfresco SLA application, as specified in Section 5, for the test and production environments.


Annex 1: SAMPLE SLA


1. System Description
a) A differentiation between two phases: Test Hosting and Promotion to Production, with associated requirements.
b) Application Description: PTS SOH system, external access.
c) Application Function/Purpose: provides SOH information to Station Operators, NDCs, authorized users, etc.
d) Expected number and types of users (total numbers, type(s) (external, internal, registered, Member States), expected concurrent connections): ~100 users; several concurrent connections.
e) Documentation availability (user, technical (deployment guide), reference locations): http://scimid.idc.ctbto.org/smwiki/index.php?n=Operations.StateOfHealth
f) Training requirements: will training be required for deployment, users or technical staff? Not planned.

2. System Software
a) Operating system: Linux CentOS 5.2
b) Programming languages/platform (if relevant): Java, Python
c) Application servers (if relevant): N/A
d) Application source: is the full source code available to the organization? If not, who is the owner? Yes; see Wiki pages.

e) Database connections (are separate DB instances required? data volume): not initially.
f) Reporting integration (will the application require or be connected to an external reporting platform?): N/A
g) Usage of Content Management Repository/File Services: N/A
h) Other applications currently not used in the infrastructure without which the application will not work:

3. Network Services and Security
a) Expected traffic, connectivity and interface to other systems, protocol (HTTP/HTTPS, others): HTTP, CORBA; low traffic requirements.
b) Type of user authentication (none, HTTP, proprietary, LDAP variant, NTLM, Windows AD, others): HTTP
c) Availability of a special security policy for the application (e.g. password expiration policy):

4. Release, Configuration and Change Management
a) Installation services: how will the application be deployed? (*.war, *.jar, other archive files available; reference documentation): tar files
b) Availability of tools and policies:
c) Will licenses be needed for some software components? No licenses needed.

5. Support
a) Owner: which organizational unit is primarily responsible for the system? Development: IMS/ED; Operations: IDC/NDSO
b) Application manager: who is the key user managing the application front end (user management, permissions and rights, etc.)? Gonzalo Perez, IMS/ED, ext. 6291
c) Alternate (backup for the application manager): Enrique Castillo, IDC/NDSO
d) Incident and problem management (incident reporting tool, FAQs, training material): How will users raise an incident (telephone, email, special application / other helpdesk services)? Are all incidents recorded and tracked, and is the progress path traceable by end users? Does the tool allow service requests to be passed on to another layer or support group (first level support, second level support, etc.)? Which metadata is available for each problem report (classifications like major/minor incident, known error, impact, priority, urgency, enhancement requests, etc.)? Are there regular reviews of logged problems, knowledge base, etc.?
e) Software maintenance team: is there an application support contract for the application(s)? Yes.

6. Operations
a) Proposed roll-out date (test / production): Test: May 2009; Production: Oct 2009
b) Backup services: what needs to be backed up? How will backup be implemented? How often for which part? What is the acceptable data loss period (current transaction, few hours, days, etc.)? Data is a copy of the internal SOH server, which is backed up.
c) Shut-down procedures and planned outages (who should be notified?): the application managers, see above.
d) Special application monitoring indicator (specific page, result set, queries; service/ports alive, etc.): ports 50000, 10010
e) Access hours (office hours, 24/7, other restrictions): 24/7
f) Expected data volume, storage needs and disk space requirements: not defined yet
g) Acceptable downtime / availability requirement: not mission critical

7. Budgetary Contributions to Support Infrastructure
a) Has the unit responsible earmarked a budget to support internal infrastructure? No.

8. Promotion to Production Environment
a) User acceptance test
b) Stress tests
c) Security and intrusion tests

9. Obligations of the Support Group
a) Reports on service levels provided
b) Planned outage notifications
