Proc. of IEEE 5th International Workshop on the Network of the Future, Ottawa, Canada, June 2012

Constructing a Virtual Networking Environment in a Geo-Distributed Programmable Layer-2 Networking Environment (G-PLaNE)

Tianyi Xing¹, Xuan Liu², Chun-Jen Chung¹, Akira Wada³, Shingo Ata³, Dijiang Huang¹, Deep Medhi²
¹Arizona State University, USA, ²University of Missouri–Kansas City, USA, ³Osaka City University, Japan

Abstract—With Cloud Computing occupying a major share of future Internet research and development, research on deploying and extending existing capabilities onto newly emerging infrastructures becomes more significant. For example, extending virtual network provisioning onto a Geo-distributed Programmable Layer-2 Networking Environment (G-PLaNE) is a novel attempt and differs from provisioning in a single-domain system. In this paper, we illustrate how to construct a virtual networking environment upon our self-designed resource provisioning system, consisting of multiple clusters, through G-PLaNE. Experimenters and researchers are able to develop and explore their own mechanisms on our platform. Furthermore, a concrete example named Secure and Resilient Virtual Trust Routing (SeRViTR) is given to illustrate how such an environment can be constructed over G-PLaNE.

I. INTRODUCTION

The future cyber system is envisioned to be composed of both Cyber Physical Systems (CPS), which perform communication, networking, and processing functions in the field based on data collected from various probes, and Cyber Virtual Systems (CVS), which mimic real-world functions, perform data mining, learning, and analysis, and then provide location-based, personalized, and timely assistance. We use the terms cluster and domain to represent the CPS and CVS, respectively. A domain can be regarded as a set of resources virtually formed as a whole. It can reside within a single cluster or across multiple geographically distributed clusters. For example, a domain may refer to a virtual network including three VMs located at three different locations. Although tremendous efforts have been devoted to the physical side (i.e., CPS), the current networking environment is gradually shifting its functionality from the CPS side to the CVS side due to the high flexibility of virtual environments.

Virtual networking has become a representative component/infrastructure of CVS, since single virtual machines cannot meet the requirements of many experiment users. A virtual network is a computer network that consists, at least in part, of virtual network links. A virtual network link does not consist of a physical (wired or wireless) connection between two computing devices but is implemented using methods of network virtualization. The two most common forms of virtual networks are protocol-based virtual networks (such as VLANs, VPNs, and VPLSs) and virtual networks based on virtual devices (such as the networks connecting virtual machines inside a hypervisor).

In practice, both forms can be used in conjunction to construct a virtualized environment that has benefits over the physical network environment.

After the neonatal stage of Cloud Computing, especially within the past few years, its focus has shifted from a centralized architecture to a geographically distributed Cloud service architecture [16] to achieve better Quality of Experience (QoE) and enable features such as high availability. For example, a service/resource request from a mobile user arriving at the system has to be sent to the master domain, which processes and dispatches it to the sub-domain that has available resources close to the user's geographic location. This demands that resources such as computing, storage, and networking be managed in a coherent and systematic fashion. From the application perspective, the available resources should be provided within the same virtual networking environment to reduce resource management overhead at the application level. Therefore, a flexible, programmable, and scalable point-to-point virtual networking environment is desired. To this end, our research goal is to provide a way to construct a layer-2 virtual network across multiple virtual domains or physical clusters. The presented research addresses the challenges brought by combining the issues of virtual networks and geo-distributed architectures, as listed below:

• High Availability. High availability refers to the constant ability of the user community to access the system. Resources in a Cloud system are mainly virtual machines (VMs) that are highly dependent on the underlying physical hardware. Resources will probably become unavailable if there are failures at the hardware level or interference from neighboring VMs. Thus, it is better not to put all eggs in one basket: a geo-distributed architecture improves availability to some extent, since data and files can be replicated at different clusters to prevent trouble caused by a single point of failure.

• Intra- & Inter-Domain Communication. Experimenters or users might expect certain types of resources, including virtual resources, to connect with each other across domains. It is relatively easy to construct such a virtual resource set in a single cluster, since servers in the same cluster are usually physically connected at layer 2. However, traffic between two VMs in the same server also needs to be sent to the connected physical switch, which exposes packets, introduces inefficiency, and raises security issues. Furthermore, data and services sometimes need to be transferred from one domain to another, which depends heavily on inter-domain communication. How to establish highly connected resources and securely transfer services or data to the destination server without noticeable delay is another challenge for existing geographically distributed systems.

• Network Programmability. Current experimenters and researchers not only look to claim plain resources to run their services or applications but also expect a virtual network with deeply programmable capabilities. However, providing programmable capabilities for a virtual network across different domains introduces more difficulty and depends heavily on the virtualization level that the service provider offers. To better take advantage of virtual network resources, various programmable networking components, e.g., OpenFlow [10] programmable switches, are expected from service providers.

The challenges listed above demand a comprehensive approach to constructing a programmable virtual networking environment in a geo-distributed fashion. In this paper, we utilize both hardware and software layer-2 virtual networking techniques to construct a virtual networking environment called the Geo-distributed Programmable Layer-2 Networking Environment (G-PLaNE). G-PLaNE is based on multiple physical/virtual domains among three universities' campus research platforms. G-PLaNE provides a layer-2 programmable virtual network resource provisioning service based on its geo-distributed infrastructure, which addresses the challenges listed above. We also establish a demonstrative virtual networking system, SeRViTR [20], to illustrate how to use G-PLaNE. The current research/development status and future directions are described.

The rest of the paper is organized as follows. Section II briefly introduces our Geo-distributed Programmable Layer-2 Networking Environment (G-PLaNE for short) and then illustrates how the virtual networking environment is constructed upon the G-PLaNE system. A case study built upon a G-PLaNE virtual networking environment is given in Section III to illustrate how this system can support SeRViTR. Section IV discusses related work, and finally, the summary and future work are discussed in Section V.

II. CONSTRUCTING A VIRTUAL NETWORKING ENVIRONMENT ON G-PLaNE

In this section, we present our novel Geo-distributed Programmable Layer-2 Networking Environment (G-PLaNE), upon which a virtual network construction approach is demonstrated. An overall description of the G-PLaNE system is first presented, and then the details of constructing virtual networks in G-PLaNE are illustrated.

Fig. 1. G-PLaNE Architecture Design. (Clusters at OCU, UMKC, and ASU interconnect through inter-cluster OpenFlow switch routers; each cluster contains a management & storage network with a Network File System (NFS) behind a storage switch, internal data networks (VLANs) over Open Virtual Switches (OVSes), a NetFlow & sFlow monitoring system, a resource pool of VMs and services (Web server, VPN, domain controller, DNS), and OpenFlow switch routers serving as incoming and outgoing traffic gateways.)

A. G-PLaNE Basics

G-PLaNE currently consists of three sites, located at Arizona State University (ASU) and the University of Missouri–Kansas City (UMKC) in the US and Osaka City University (OCU) in Japan. It is designed to provide computing, storage, and networking capabilities for both fixed terminals and mobile devices, which usually have limited resources and capabilities. The system components and architecture are shown in Fig. 1.

1) System Components: We partition the G-PLaNE system into the following components.

Computing Component. Computing capability is the major service that most resource provisioning platforms provide. We use Xen [25] to maximally utilize the resource pool (physical XenServers) by creating multiple virtual machines. It has been shown that Xen has impressive scalability, reliability, and security compared with other virtualization technologies (e.g., VMware ESXi, Hyper-V, and KVM) [24]. With virtualization and programmability enabled, G-PLaNE can provide logically separate resources for end users in the form of a general routing suite, OpenFlow switches/routers, and controllers. A resource pool always has at least one physical node, known as the master. Other physical nodes join the existing pool and are described as slaves. Only the master node exposes an administration interface and forwards commands to individual slaves as necessary.

Administrative Component. We also introduce dedicated management and monitoring servers to administrate the VMs and network resources in the resource pool and to monitor network traffic within and across domains. NetFlow and sFlow [21] are both enabled to inspect layer-2 and layer-3 networking as well as host performance (e.g., CPU and memory utilization). There is also a set of internal functional servers serving different administrative purposes, e.g., Web server, DHCP, DNS, authentication server, DB server, and VPN.

Storage Component. Storage is another major concern for a geographically distributed system, since resources are prepared from a repository where the resource templates are stored. We do not use local storage in G-PLaNE but instead chose the Network File System (NFS) to manage the storage of resources. The NFS storage server connects to the resource pool via a dedicated storage switch, which greatly increases scalability.
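To make the computing component concrete, the following is a minimal sketch of driving the resource pool through the XenAPI Python bindings that ship with XenServer. This is our illustration rather than G-PLaNE's actual management code; the master address, credentials, and template name are hypothetical.

```python
import XenAPI

# Hypothetical pool master and credentials; only the master exposes
# the administration interface and relays commands to the slaves.
session = XenAPI.Session("https://gplane-master.example.edu")
session.xenapi.login_with_password("root", "secret")
try:
    # Enumerate the pool: exactly one master, the rest are slaves.
    pool = session.xenapi.pool.get_all()[0]
    master = session.xenapi.pool.get_master(pool)
    for host in session.xenapi.host.get_all():
        role = "master" if host == master else "slave"
        print(role, session.xenapi.host.get_name_label(host))

    # Instantiate a VM from a stored template (templates reside on
    # the NFS storage repository); the template name is hypothetical.
    template = session.xenapi.VM.get_by_name_label("gplane-router-template")[0]
    vm = session.xenapi.VM.clone(template, "experiment-router-1")
    session.xenapi.VM.provision(vm)
    session.xenapi.VM.start(vm, False, False)  # not paused, not forced
finally:
    session.xenapi.session.logout()
```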

2) Network Architecture: In the G-PLaNE system, the data plane and the control plane are isolated. The management network (control plane) carries management and control traffic (e.g., service requests and application downloads from our repository). The data network, on the other hand, carries data traffic among different VMs, or between terminals via VMs. As shown in Fig. 1, there are four networks in each cluster. The incoming and outgoing traffic switches isolate the traffic going out of or coming into the G-PLaNE domain. With this design, we can easily control the privilege of resources accessing the Internet, which enhances the security of the resource network environment. Communication between VMs goes through the data network. The data network switch is a managed switch with VLAN support, which enables different VMs to be placed in different virtual domains. Additionally, the G-PLaNE management network connects the internal NetFlow and sFlow monitoring system so that an administrator can dynamically monitor network performance. We consider not only VM-to-VM communication within one physical cluster but also communication between VMs located at different clusters; for this reason, the OpenFlow switch [10] has been introduced to establish the inter-domain data link. To increase efficiency and security, each G-PLaNE server is installed with Open vSwitch [4], with which traffic between two VMs in the same physical server does not need to go through the physical data network switch, where it would be exposed. The details of this dual-switch design are further explained in Section II-B.

3) Networking Programmability: Both the OpenFlow switch (OFS) and Open vSwitch (OVS) are OpenFlow-based switches. In the OpenFlow architecture, a controller executes all control tasks of the switches, including those used for deploying new networking frameworks, such as new routing protocols or optimized cross-layer packet-switching algorithms. With these features, a programmable network is established to provide network programmability for Cloud providers. It is feasible to develop a tenant-based policy or protocol to control both internal OVSes and external OFSes in a virtual networking environment. Several OFS controllers following the OpenFlow standard are available, such as Onix [19], SNAC [9], and NOX [14]. The OFS as well as the controller can be easily deployed on VMs, since they have software-based implementations. With the dynamic resource provisioning mechanism, users are able to request a dedicated private virtual OpenFlow network on G-PLaNE and develop their own network topology and control mechanisms. A virtual network is created from several templates, including software-based OpenFlow switches and different controllers pre-installed in VMs. A user can easily turn a claimed general virtual network into an OpenFlow-based programmable virtual network by enabling some pre-installed functions. Besides this OpenFlow-based switch/control model, there are also all-in-one routing suites; for example, the Quagga Routing Suite [6] can be deployed into the virtual network, upon which users can develop their research and experiments.
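As a small illustration of this routing-suite option, the sketch below (ours, not part of G-PLaNE's tooling) turns a claimed VM into a basic OSPF router through Quagga's integrated vtysh shell. It assumes Quagga is installed on the VM; the advertised subnets are hypothetical experiment networks.

```python
import subprocess

def sh(cmd):
    """Run a shell command on the VM, raising on failure."""
    subprocess.run(cmd, shell=True, check=True)

# Let the kernel forward packets between the VM's interfaces.
sh("sysctl -w net.ipv4.ip_forward=1")

# Push an OSPF configuration through vtysh; each -c runs one
# configuration command in sequence.
sh("vtysh"
   " -c 'configure terminal'"
   " -c 'router ospf'"
   " -c 'network 10.10.1.0/24 area 0'"
   " -c 'network 10.10.2.0/24 area 0'"
   " -c 'end'"
   " -c 'write memory'")
```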

Fig. 2. Intra- & Inter-Domain Network Architecture. (XenServers in each cluster run an Open vSwitch carrying per-VLAN virtual networks (VLAN 1, VLAN 2); servers within a cluster connect through a physical managed switch (VLAN), and the clusters interconnect through OpenFlow switches/routers (OFS) over a GRE tunnel.)

B. Virtual Network Construction

To address the challenges listed in Section I, we choose a geo-distributed architecture to support resource provisioning over multiple clusters. Resources, mainly in the form of VMs, are connected as a network. The resource network can be created in different configurations according to different requirements: 1) a single physical server, 2) multiple servers within one cluster (servers in the same cluster are connected through a layer-2 physical switch), and 3) multiple servers belonging to different clusters.

1) Intra-Cluster Network Creation: Intra-cluster means there is always a native layer-2 connection among all resources within the same cluster. To create a virtual network within the same cluster, VLAN technology is deployed. As previously mentioned, it is inefficient to forward packets through the managed switch from one VM to another in the same physical resource provisioning server. Therefore, each XenServer has an internal Open vSwitch enabled to handle traffic inside the physical server, as shown in Fig. 2. Open vSwitch is designed to enable massive network automation through programmatic extensions, while still supporting standard management interfaces and protocols (e.g., NetFlow, sFlow, RSPAN, ERSPAN, CLI, LACP, and 802.1ag). Open vSwitch can operate as a software-based switch running within the hypervisor (Xen Dom 0), in which many security control functions can be implemented. With Open vSwitch enabled, a packet sent from one VM to another within the same physical server does not need to be exposed outside the physical box. When a virtual network is created within the same cluster but across different physical servers, a packet sent from one VM to another on a different server goes through the physical managed switch via trunk ports. A virtual network containing multiple VMs in different physical servers is simply created by assigning the same VLAN ID, so that it is virtually isolated from other resources.

2) Inter-Cluster Network Creation: To enable the provisioning of virtual networks across clusters in G-PLaNE, we establish a layer-2 GRE tunnel among the sites by deploying OpenFlow switches running on top of NetFPGA boxes; the corresponding switch configuration is sketched below.
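To make the dual-switch design concrete, here is a minimal sketch (our illustration, not the paper's tooling) of the corresponding Open vSwitch configuration on one XenServer, driven from Python. The bridge name, vif name, VLAN ID, and remote cluster address are hypothetical.

```python
import subprocess

def ovs(*args):
    """Invoke ovs-vsctl on this XenServer's Open vSwitch."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# Intra-cluster: put a VM's virtual interface into VLAN 10. Traffic
# between two VMs tagged alike on the same server is switched entirely
# inside the hypervisor and never reaches the physical switch.
ovs("add-port", "xenbr0", "vif5.0", "tag=10")

# Inter-cluster: add a GRE tunnel port toward the remote site so that
# layer-2 frames (including VLAN-tagged ones) can cross clusters.
ovs("add-port", "xenbr0", "gre0", "--",
    "set", "interface", "gre0", "type=gre",
    "options:remote_ip=203.0.113.5")
```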

Fig. 3. MobiCloud Inter-Domain Connection. (sFlow view showing the internal IP connections at the ASU site, including the Cloud server master node at 10.0.0.3 and the storage server at 10.0.0.253; the layer-2 non-IP connections; and the layer-2 GRE tunnel between the US sites (ASU, UMKC) and OCU in Japan.)

After a layer-2 tunnel is established, VLANs can function well on top of it, since, strictly speaking, VLAN is a layer-2.5 technology. Although there are several options for establishing the layer-2 tunnel, we choose the OpenFlow solution since it is user-centric and can be easily extended thanks to its programmability. OpenFlow is an open standard that enables researchers to run experimental protocols. In a classical router or switch, the fast packet forwarding (data path) and the high-level routing decisions (control path) occur on the same device. An OpenFlow switch separates these two functions: the data path portion still resides on the switch, while high-level routing decisions are moved to a separate controller, typically a standard server. The OpenFlow switch and controller communicate via the OpenFlow protocol, which defines messages such as packet-received, send-packet-out, modify-forwarding-table, and get-stats. The data path of an OpenFlow switch presents a clean flow table abstraction; each flow table entry contains a set of packet fields to match and an action (such as send-out-port, modify-field, or drop).

An sFlow daemon is enabled to inspect the connection status from the following three aspects, as shown in Fig. 3: internal connections, external connections, and non-IP connections. The top-left part of Fig. 3 shows the internal connections, representing the internal topology at the ASU site, where the connections between the Cloud server master node (10.0.0.3) and the storage server (10.0.0.253) can be seen. The top-right part shows the layer-2 non-IP connections, including unicast and broadcast. The bottom part shows the external connections, indicating the GRE tunnel between the two sites in the US and the one site in Japan.

C. Enabling Additional Research Capabilities

With the virtual network provisioning service in place, users are able to expand their own development and research work. Some representative directions are listed below:

• Routing Algorithm & Protocol Design. Because we provide resources at the IaaS level, users have more flexibility and privileges on their claimed resources. Each virtual machine can easily be turned into a general virtual router or OpenFlow switch by installing a software-based routing suite. Thus, experimenters and researchers can have a real network environment in any topology they expect. With the capability of modifying entire routing protocol or algorithm modules, research on routing protocols and other network-layer mechanisms can be investigated, tested, and measured in a real networking environment (see the controller sketch following this list).

• Distributed Application Development. G-PLaNE users can use any IP-connected device to communicate with their VMs, which support a variety of OSes, so developers can build Cloud-based applications on both the Cloud side and the mobile device side. Moreover, once virtual network resources have been provided, more complicated networking and distributed applications can be developed and tested in a real distributed networking environment.

• Distributed Security and Privacy Model. Some research work [22] deploys a Cloud to help protect users from viruses and enhance security, and Cloud-based anti-virus engines have been emerging and are being studied. If users are able to control a virtual network rather than a single VM or multiple isolated VMs, then more advanced models can be investigated. For example, different anti-virus engines can be placed on different VMs, with a centralized control VM running an algorithm to coordinate them, which is expected to enhance efficiency to some extent. Generally speaking, any security and privacy research issue can be investigated in the real distributed environment that G-PLaNE provides.
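As an illustration of the flow-table abstraction described above, the following is a minimal controller written for POX (the Python successor of NOX; our choice for the sketch, not necessarily what G-PLaNE deploys). It reacts to packet-in messages by installing a flow entry whose match is taken from the packet and whose action floods it, i.e., hub behavior.

```python
# Minimal POX module; save as ext/gplane_hub.py and run: ./pox.py gplane_hub
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_PacketIn(event):
    # The switch had no matching flow entry, so it forwarded the
    # packet to the controller in a packet-in message.
    msg = of.ofp_flow_mod()
    msg.match = of.ofp_match.from_packet(event.parsed, event.port)
    msg.idle_timeout = 10  # let short-lived flows expire
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    msg.data = event.ofp   # also release the buffered packet
    event.connection.send(msg)

def launch():
    # Register for packet-in events from every connected switch.
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
    log.info("G-PLaNE demo controller running")
```

Subsequent packets of the same flow are then handled entirely in the switch's data path without visiting the controller.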



III. CASE STUDY: SeRViTR

The G-PLaNE system is designed to support research in a virtual networking environment. In this section, we present a case study on how to utilize the G-PLaNE system to generate and manage virtual domains between geo-distributed platforms.

A. SeRViTR Architecture Overview

A trustworthiness model for future networks called the Virtual Trust Routing and Provisioning Domain (VTRouPD) [15] has recently been proposed. Trustworthiness can be defined along many facets. From the viewpoint of a network, it means that routing information must be confidential, secured, and protected, whereas from the service provider's perspective, trustworthiness of service assures that the service is safe and excludes anonymous users. Given this variety of trustworthiness, a network should be sliced into multiple virtual domains that are isolated from each other. A VTRouPD is constructed from a collection of networking resources, including routers and switches, based on virtualization techniques, e.g., constructing virtually managed domains through tunneling and VLAN technologies. Within one or multiple VTRouPDs, we can further create flow- or application-level virtual routing domains, denoted as µVTRouPDs [15]. In our recent work [20], the intra- and inter-domain policy and trust management between VTRouPDs within the SeRViTR architecture is discussed.

Fig. 4 presents an overview of the SeRViTR architecture. The figure shows two VTRouPDs that are geo-distributed clusters at two places: Arizona State University and the University of Missouri–Kansas City. Each VTRouPD is an overall domain under administrative control, and it may contain multiple Virtual Domains. Here, we only give a very high-level description of the SeRViTR components and their relations; their functions are described in detail in [20].

Fig. 4. SeRViTR Architecture Overview. (Two VTRouPDs, at Cluster A (ASU) and Cluster B (UMKC), each containing Virtual Domains formed from VLANs 1–4, with forwarding IDs mapped to VLAN IDs, on XenServers in the resource pool; some VMs are dedicated as virtual routers, while Flow Controllers, a Policy Manager, and a VTRouPD Manager run on VMs dedicated as SeRViTR functional managers.)

Packet flows to Virtual Domains in the SeRViTR architecture are differentiated based on trustworthiness policies, and the Policy Manager plays a key role in setting up policies and establishing Virtual Domains. First, it sends policy rules to the VTRouPD Manager to request Virtual Domain creation or deletion. Second, it communicates with the Flow Controller about flow updates. A VTRouPD Manager manages the information of physical routers within the VTRouPD and is responsible for the creation and deletion of Virtual Domains as well as resource management. The Flow Controller is placed at the edge of VTRouPDs and is in charge of forwarding flows to the correct Virtual Domains based on the policy.

B. SeRViTR Deployment on G-PLaNE

The core idea behind SeRViTR is constructing the network by setting up policies to assure secured routing between virtual domains across multiple sites. To achieve such a goal, SeRViTR has to be deployed in a virtual networking environment, which is well supported by the G-PLaNE system. The G-PLaNE system also allows the clusters to scale. Take the SeRViTR clusters at ASU and UMKC presented in Fig. 4 as an example: two VTRouPDs are created at different clusters, and we have established a layer-2 GRE tunnel between them through OpenFlow switches. Within the cluster at each site, the G-PLaNE resource pool contains physical XenServers where VMs are created. Recall that VMs can be created and deployed as any form of functional entity; thus, we can customize VMs as dedicated SeRViTR functional components as well as virtual routers. In particular, the VTRouPD Managers, Flow Controllers, and Policy Managers are implemented on VMs created on one XenServer from the resource pool.

The virtual routing domain is a vital constituent of the SeRViTR architecture design, and it requires good isolation as well as scalability when constructing virtual networks. With the G-PLaNE data network switch, which supports VLANs, VMs can be grouped into different virtual domains by tagging VLAN IDs, and the IDs already in use can be queried through the G-PLaNE database. Fig. 2 shows high-level virtual domain creation by grouping VMs into distinct VLANs. In particular, consider cluster A at ASU in Fig. 4, where four XenServers are reserved from the resource pool for creating virtual routers. On each XenServer, we can customize an arbitrary number of VMs as dedicated virtual routers by deploying a routing suite (e.g., OpenFlow switch or Quagga) on them. Now consider the SeRViTR virtual domains. The PolicyManager-VM sends a request, along with a trustworthiness policy, to the VTRouPDManager-VM to announce the creation of a new virtual domain. The VTRouPDManager-VM, in turn, communicates with every XenServer that is reserved for creating VMs as virtual routers and sends out an unused VLAN ID. When creating VMs, each XenServer adds this unused VLAN ID through its API; therefore, VMs (virtual routers) created by distinct XenServers can be put into the same VLAN according to the unique VLAN ID, and the data network switch enables intra- and inter-virtual-domain communication. A sketch of this API step follows below.
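As an illustration of that API step, the following is a minimal sketch (our reconstruction, not the SeRViTR codebase) using the XenAPI Python bindings to create a VLAN-backed network for an allocated ID and plug a virtual-router VM into it. The PIF selection, VLAN ID, and VM reference are hypothetical, and some record fields vary across XenServer versions.

```python
import XenAPI

def attach_vm_to_vlan(session, vm_ref, vlan_id):
    """Create a VLAN-backed network on this XenServer and plug the VM in."""
    xapi = session.xenapi

    # A new network object representing the virtual domain.
    net = xapi.network.create({
        "name_label": "servitr-vd-%d" % vlan_id,
        "name_description": "SeRViTR virtual domain",
        "MTU": "1500",
        "other_config": {},
    })

    # Tag the network onto a physical interface (PIF) of the data
    # network; a real deployment would select the data-network NIC
    # explicitly instead of taking the first physical PIF.
    pifs = [ref for ref, rec in xapi.PIF.get_all_records().items()
            if rec["physical"]]
    xapi.VLAN.create(pifs[0], str(vlan_id), net)

    # Give the VM a virtual interface (VIF) on the tagged network.
    xapi.VIF.create({
        "device": "1", "network": net, "VM": vm_ref,
        "MAC": "", "MTU": "1500", "other_config": {},
        "qos_algorithm_type": "", "qos_algorithm_params": {},
    })
    return net
```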

IV. RELATED WORK

The virtual networking environment has become one of the main focuses of Future Internet research. A recent survey [13] provides a comprehensive summary of network virtualization technologies from multiple aspects; in particular, it states that a network can be sliced at four different layers. Affiliated with PlanetLab [5], VINI [7] supports arbitrary virtual topology creation by establishing EGRE tunnels [11]. VNET [8] establishes layer-2 tunnels between VMs through virtual LANs (VLANs). The work in [18] approaches network virtualization from a different angle, creating a mapping from network virtualization to processes invoked within virtual machines; via this analogy, the authors address the design space for network virtualization, which requires good isolation and coexistence of networks.

A virtual networking environment with good isolation and scalability can well support a variety of research work. Network virtualization in GpENI [3], [23] is discussed in [12]. GENI [2] is a well-known open, large-scale virtual laboratory for researchers to collaborate and explore future networks; the work in [17], which applies a distributed snapshot mechanism to provide fault tolerance, is deployed and demonstrated across multiple sites on the GENI infrastructure. With FlowVisor [1], an OpenFlow [10] network is able to support a virtual networking environment. Recently, network virtualization has been extended into the Cloud Computing environment, particularly in distributed networked clusters. In [26], the authors state that creating virtual networks between Cloud provider sites can provision an inter-Cloud connection over geo-distributed clusters or multi-domain networks. Still, very few previous studies have discussed resource provisioning over distributed clusters.

V. FUTURE WORK AND SUMMARY

In this paper, we first discussed the challenges and the motivation for creating a virtual network in a geo-distributed Cloud environment. We then designed a geo-distributed Cloud resource provisioning system called G-PLaNE and discussed it in terms of system components and network architecture. Virtual network creation, a major service provided by G-PLaNE, was explained from two perspectives: intra-domain and inter-domain virtual networks. A concrete example, SeRViTR, was given to illustrate how virtual network construction can support current research. This paper presents an efficient and flexible way to construct a virtual network in a geo-distributed Cloud environment and thereby enable more research capabilities. Although G-PLaNE can provide a virtual network across different clusters, there is still additional work to be done:

Dynamic VLAN recycling and assignment. Using VLANs is an efficient approach to providing a secure and isolated networking environment for VMs. Currently, we use a database to manage VLAN resources. However, some communication sessions may be short-lived, which requires an efficient dynamic VLAN allocation scheme at both the Open vSwitch level and the OpenFlow switch level. This will allow network configurations to be changed dynamically, preventing attackers from learning system weaknesses and reducing the chance of attack.

Dynamic VM Migration. Based on a mobile user's geographic location, the user's VM can be migrated from its home cluster to a guest cluster for better performance. VM migrations are performed over the inter-cluster networking system; in this way, migrations are protected, and VMs are not exposed to the public domain during the procedure.

ACKNOWLEDGMENT

The presented work is supported by the Office of Naval Research (ONR) Young Investigator Program (YIP), HP IRP, US NSF grants CNS-1029562 and CNS-1029546, and a Japan NICT International Collaborative Research Grant. The authors would like to thank Dr. Jeonakeun Lee from HP Labs for insightful discussions on network virtualization.

REFERENCES

[1] "FlowVisor," http://www.openflow.org/wk/index.php/FlowVisor.
[2] "GENI," http://www.geni.net/.
[3] "GpENI – Great Plains Environment for Network Innovation," http://www.GpENI.net.
[4] "Open vSwitch," http://openvswitch.org/.
[5] "PlanetLab," http://www.planet-lab.org/.
[6] "Quagga Routing Suite," http://quagga.net/.
[7] "VINI: Virtual Network Infrastructure," http://vini-veritas.net/.
[8] "Virtuoso: Resource management and prediction for distributed computing using virtual machines," http://virtuoso.cs.northwestern.edu/.
[9] "The SNAC OpenFlow controller," http://snacsource.org/, 2010.
[10] "OpenFlow Switch Specification," February 2011.
[11] S. Bhatia, M. Motiwala, W. Muhlbauer, Y. Mundada, V. Valancius, A. Bavier, N. Feamster, L. Peterson, and J. Rexford, "Trellis: A platform for building flexible, fast virtual networks on commodity hardware," in Proc. of the 2008 ACM CoNEXT Conference, New York, NY, USA, 2008, pp. 72:1–72:6. http://doi.acm.org/10.1145/1544012.1544084
[12] R. Cherukuri, X. Liu, A. Bavier, J. Sterbenz, and D. Medhi, "Network virtualization in GpENI: Framework, implementation and integration experience," in Proc. of the 3rd IEEE/IFIP International Workshop on Management of the Future Internet (ManFI 2011), Dublin, Ireland, May 2011, pp. 1212–1219.
[13] N. M. K. Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, pp. 862–876, 2010.
[14] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: Towards an operating system for networks," ACM SIGCOMM Computer Communication Review, July 2008.
[15] D. Huang, S. Ata, and D. Medhi, "Establishing secure virtual trust routing and provisioning domains for future Internet," in Proc. of IEEE GLOBECOM 2010 (Next Generation Networking Symposium), 2010.
[16] D. Huang, "MobiCloud: A secure mobile Cloud computing platform," E-Letter of the Multimedia Communications Technical Committee (MMTC), IEEE Communications Society (invited paper), 2011.
[17] A. Kangarlou, D. Xu, U. C. Kozat, P. Padala, B. Lantz, and K. Igarashi, "In-network live snapshot service for recovering virtual infrastructures," IEEE Network, vol. 25, no. 4, pp. 12–19, 2011.
[18] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: A hypervisor for the Internet?" IEEE Communications Magazine, vol. 50, no. 1, pp. 136–143, 2012.
[19] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, and S. Shenker, "Onix: A distributed control platform for large-scale production networks," in Proc. of OSDI, 2010.
[20] X. Liu, A. Wada, T. Xing, P. Juluri, Y. Sato, S. Ata, D. Huang, and D. Medhi, "SeRViTR: A framework for trust and policy management for a secure Internet and its proof-of-concept implementation," in Proc. of the 4th IEEE/IFIP International Workshop on Management of the Future Internet (ManFI 2012), Maui, Hawaii, April 2012.
[21] Network Instruments, "Extending network visibility by leveraging NetFlow and sFlow technologies," White Paper, February 2011.
[22] J. Oberheide, E. Cooke, and F. Jahanian, "CloudAV: N-version antivirus in the network Cloud," in Proc. of the 17th USENIX Security Symposium, 2008.
[23] J. Sterbenz, D. Medhi, B. Ramamurthy, C. Scoglio, D. Hutchison, B. Plattner, T. Anjali, A. Scott, C. Buffington, G. Monaco, D. Gruenbacher, R. McMullen, J. Rohrer, J. Sherrell, P. Angu, R. Cherukuri, H. Qian, and N. Tare, "The Great Plains Environment for Network Innovation (GpENI): A programmable testbed for future Internet architecture research," in Proc. of the 6th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities (TridentCom), Berlin, Germany, May 2010, pp. 428–441.
[24] Xen Community, "Why Xen?"
[25] xen.org, "Xen Hypervisor," http://www.xen.org/.
[26] Y. Xin, I. Baldine, A. Mandal, C. Heermann, J. Chase, and A. Yumerefendi, "Embedding virtual topologies in networked clouds," in Proc. of the 6th International Conference on Future Internet Technologies (CFI '11), 2011.