vSphere Distributed Switch Best Practices


TECHNICAL WHITE PAPER


Table of Contents

Introduction
Design Considerations
  Infrastructure Design Goals
  Infrastructure Component Configurations
  Virtual Infrastructure Traffic
Example Deployment Components
  Hosts
  Clusters
  VMware vCenter Server
  Network Infrastructure
  Virtual Infrastructure Traffic Types
Important Virtual and Physical Switch Parameters
  VDS Parameters
    Host Uplink Connections (vmnics) and dvuplink Parameters
    Traffic Types and dvportgroup Parameters
    dvportgroup Specific Configuration
    NIOC
    Bidirectional Traffic Shaping
  Physical Network Switch Parameters
    VLAN
    Spanning Tree Protocol
    Link Aggregation Setup
    Link-State Tracking
    Maximum Transmission Unit
Rack Server in Example Deployment
  Rack Server with Eight 1GbE Network Adaptors
    Design Option 1 – Static Configuration
      dvuplink Configuration
      dvportgroup Configuration
      Physical Switch Configuration
    Design Option 2 – Dynamic Configuration with NIOC and LBT
      dvportgroup Configuration
  Rack Server with Two 10GbE Network Adaptors
    Design Option 1 – Static Configuration
      dvuplink Configuration
      dvportgroup Configuration
      Physical Switch Configuration
    Design Option 2 – Dynamic Configuration with NIOC and LBT
      dvportgroup Configuration
Blade Server in Example Deployment
  Blade Server with Two 10GbE Network Adaptors
    Design Option 1 – Static Configuration
    Design Option 2 – Dynamic Configuration with NIOC and LBT
  Blade Server with Hardware-Assisted Logical Network Adaptors (HP Flex-10– or Cisco UCS–like Deployment)
Operational Best Practices
  VMware vSphere Command-Line Interface
  VMware vSphere API
  Virtual Network Monitoring and Troubleshooting
  vCenter Server on a Virtual Machine
Conclusion


Introduction

This paper provides best practice guidelines for deploying the VMware vSphere® distributed switch (VDS) in a vSphere environment. The advanced capabilities of VDS provide network administrators with more control of and visibility into their virtual network infrastructure. This document covers the different considerations that vSphere and network administrators must take into account when designing the network with VDS. It also discusses some standard best practices for configuring VDS features.

The paper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to which physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure. It is important to note that customers are not limited to the design options described in this paper. The flexibility of the vSphere platform allows for multiple variations in the design options that can fulfill an individual customer's unique network infrastructure needs.

This document is intended for vSphere and network administrators interested in understanding and deploying VDS in a virtual datacenter environment. With the release of vSphere 5, there are new features as well as enhancements to the existing features in VDS. To learn more about these new features and enhancements, refer to the What's New in Networking paper: http://www.vmware.com/resources/techresources/10194.

Readers are also encouraged to review basic virtual and physical networking concepts before reading this document. The following link provides technical resources for virtual networking concepts: http://www.vmware.com/technical-resources/virtual-networking/resources.html. For physical networking concepts, readers should refer to their physical network switch vendor's documentation.

Design Considerations

The following three main aspects influence the design of a virtual network infrastructure:

1) The customer's infrastructure design goals
2) The customer's infrastructure component configurations
3) The virtual infrastructure traffic requirements

Let's take a look at each of these aspects in a little more detail.

Infrastructure Design Goals

Customers want their network infrastructure to be available 24/7, to be secure from attacks, to perform efficiently throughout day-to-day operations, and to be easy to maintain. In a virtualized environment, these requirements become increasingly demanding as growing numbers of business-critical applications run in a consolidated setting. These requirements translate into design decisions that should incorporate the following best practices for a virtual network infrastructure:

• Avoid any single point of failure in the network.
• Isolate each traffic type for increased resiliency and security.
• Make use of traffic management and optimization capabilities.


Infrastructure Component Configurations

In every customer environment, the compute and network infrastructures differ in terms of configuration, capacity and feature capabilities. These different infrastructure component configurations influence the virtual network infrastructure design decisions. The following are some of the configurations and features that administrators must look out for:

• Server configuration: rack or blade servers
• Network adaptor configuration: 1GbE or 10GbE network adaptors; number of available adaptors; offload function on these adaptors, if any
• Physical network switch infrastructure capabilities: switch clustering

It is impossible to cover all the different virtual network infrastructure design deployments based on the various combinations of server types, network adaptors and network switch capability parameters. In this paper, the following four commonly used deployments, based on standard rack server and blade server configurations, are described:

• Rack server with eight 1GbE network adaptors
• Rack server with two 10GbE network adaptors
• Blade server with two 10GbE network adaptors
• Blade server with hardware-assisted multiple logical Ethernet network adaptors

It is assumed that the network switch infrastructure has standard layer 2 switch features (high availability, redundant paths, fast convergence, port security) available to provide reliable, secure and scalable connectivity to the server infrastructure.

Virtual Infrastructure Traffic

The vSphere virtual network infrastructure carries different traffic types. To manage the virtual infrastructure traffic effectively, vSphere and network administrators must understand the different traffic types and their characteristics. The following are the key traffic types that flow in the vSphere infrastructure, along with their traffic characteristics:

• Management traffic: This traffic flows through a vmknic and carries ESXi host-to-vCenter Server configuration and management communication, as well as ESXi host-to-ESXi host high availability (HA)-related communication. This traffic has low network utilization but has very high availability and security requirements.

• VMware vSphere vMotion® traffic: With advancements in vMotion technology, a single vMotion instance can consume almost a full 10GbE bandwidth. A maximum of eight simultaneous vMotion instances can be performed on a 10GbE uplink; four simultaneous vMotion instances are allowed on a 1GbE uplink. vMotion traffic has very high network utilization and can be bursty at times. Customers must make sure that vMotion traffic doesn't impact other traffic types, because it might consume all available I/O resources. Another property of vMotion traffic is that it is not sensitive to throttling, which makes it a very good candidate for traffic management.

• Fault-tolerant traffic: When VMware Fault Tolerance (FT) logging is enabled for a virtual machine, all the logging traffic is sent to the secondary fault-tolerant virtual machine over a designated vmknic port. This process can require a considerable amount of bandwidth at low latency, because it replicates the I/O traffic and memory-state information to the secondary virtual machine.

• iSCSI/NFS traffic: IP storage traffic is carried over vmknic ports and varies according to disk I/O requests. With end-to-end jumbo frame configuration, more data is transferred with each Ethernet frame, decreasing the number of frames on the network. These larger frames reduce the overhead on servers/targets and improve IP storage performance. On the other hand, congested and lower-speed networks can cause latency issues that disrupt access to IP storage. It is recommended that users provide a high-speed path for IP storage and avoid any congestion in the network infrastructure.


• Virtual machine traffic: Depending on the workloads running on the guest virtual machines, the traffic patterns will vary from low to high network utilization. Some of the applications running in virtual machines might be latency sensitive, as is the case with VoIP workloads.

Table 1 summarizes the characteristics of each traffic type.

TRAFFIC TYPE      BANDWIDTH USAGE           OTHER TRAFFIC REQUIREMENTS
MANAGEMENT        Low                       Highly reliable and secure channel
VMOTION           High                      Isolated channel
FT                Medium to high            Highly reliable, low-latency channel
ISCSI             High                      Reliable, high-speed channel
VIRTUAL MACHINE   Depends on application    Depends on application

Table 1. Traffic Types and Characteristics
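The jumbo-frame point made for iSCSI/NFS traffic above can be quantified with a quick calculation. The sketch below is illustrative (the function name is ours); the overhead figures are standard Ethernet II, IPv4 and TCP header sizes, assuming no TCP options or VLAN tag.

```python
# Rough per-frame efficiency of standard vs. jumbo frames for IP storage.
# Overheads: 14B Ethernet header + 4B FCS on the wire; 20B IPv4 + 20B TCP
# headers come out of the MTU. Illustrative values only.

def payload_efficiency(mtu: int) -> float:
    """Fraction of each frame's wire bytes that carry storage payload."""
    eth_overhead = 14 + 4          # Ethernet II header + frame check sequence
    ip_tcp_overhead = 20 + 20      # IPv4 + TCP headers taken out of the MTU
    payload = mtu - ip_tcp_overhead
    wire_bytes = mtu + eth_overhead
    return payload / wire_bytes

std = payload_efficiency(1500)     # standard MTU
jumbo = payload_efficiency(9000)   # typical jumbo-frame MTU

print(f"1500-byte MTU: {std:.1%} payload")    # ~96.2%
print(f"9000-byte MTU: {jumbo:.1%} payload")  # ~99.4%
print(f"Frames per 1 GB transfer: {1_000_000_000 // (1500 - 40):,} vs "
      f"{1_000_000_000 // (9000 - 40):,}")
```

The efficiency gain per frame is modest; the larger win is the roughly sixfold drop in frame count, which reduces per-frame processing on hosts and storage targets.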

To understand the different traffic flows in the physical network infrastructure, network administrators use network traffic management tools. These tools help monitor the physical infrastructure traffic but do not provide visibility into virtual infrastructure traffic. With the release of vSphere 5, VDS now supports the NetFlow feature, which enables exporting the internal (virtual machine-to-virtual machine) virtual infrastructure flow information to standard network management tools. Administrators now have the required visibility into virtual infrastructure traffic. This helps administrators monitor the virtual network infrastructure traffic through a familiar set of network management tools. Customers should make use of the network data collected from these tools during capacity planning or network design exercises.

Example Deployment Components

After looking at the different design considerations, this section provides a list of the components used in an example deployment. This example deployment helps illustrate some standard VDS design approaches. The following are some common components in the virtual infrastructure. The list doesn't include the storage components that are required to build the virtual infrastructure. It is assumed that customers will deploy iSCSI storage in this example deployment.

Hosts

Four ESXi hosts provide compute, memory and network resources according to the configuration of the hardware. Customers can have different numbers of hosts in their environment, based on their needs. One VDS can span across 350 hosts. This capability to support large numbers of hosts provides the required scalability to build a private or public cloud environment using VDS.

Clusters

A cluster is a collection of ESXi hosts and associated virtual machines with shared resources. Customers can have as many clusters in their deployment as are required. With one VDS spanning across 350 hosts, customers have the flexibility of deploying multiple clusters with a different number of hosts in each cluster. For simple illustration purposes, two clusters with two hosts each are considered in this example deployment. One cluster can have a maximum of 32 hosts.


VMware vCenter Server

VMware vCenter Server™ centrally manages a vSphere environment. Customers can manage a VDS through this centralized management tool, which can be deployed on a virtual machine or a physical host. The vCenter Server system is not shown in the diagrams, but customers should assume that it is present in this example deployment. It is used only to provision and manage the VDS configuration. When provisioned, host and virtual machine networks operate independently of vCenter Server. All components required for network switching reside on the ESXi hosts. Even if the vCenter Server system fails, the hosts and virtual machines will still be able to communicate.

Network Infrastructure

Physical network switches in the access and aggregation layers provide connectivity between ESXi hosts and to the external world. These network infrastructure components support standard layer 2 protocols providing secure and reliable connectivity.

Along with the preceding four components of the physical infrastructure in this example deployment, some of the virtual infrastructure traffic types are also considered during the design. The following section describes the different traffic types in the example deployment.

Virtual Infrastructure Traffic Types

In this example deployment, there are standard infrastructure traffic types, including iSCSI, vMotion, FT, management and virtual machine. Customers might have other traffic types in their environment, based on their choice of storage infrastructure (FC, NFS, FCoE). Figure 1 shows the different traffic types along with associated port groups on an ESXi host. It also shows the mapping of the network adaptors to the different port groups.

[Figure: an ESXi host running a VDS with port groups PG-A through PG-E carrying virtual machine traffic and iSCSI (vmk1), FT (vmk2), management (vmk3) and vMotion (vmk4) traffic]

Figure 1. Different Traffic Types Running on a Host


Important Virtual and Physical Switch Parameters

Before going into the different design options in the example deployment, let's take a look at the virtual and physical network switch parameters that should be considered in all of the design options. These are some key parameters that vSphere and network administrators must take into account when designing VMware virtual networking. Because the configuration of virtual networking goes hand in hand with the physical network configuration, this section covers both the virtual and physical switch parameters.

VDS Parameters

VDS simplifies the configuration process by providing a single pane of glass for virtual network management tasks. As opposed to configuring a vSphere standard switch (VSS) on each individual host, administrators can configure and manage one single VDS. All centrally configured network policies on the VDS are pushed down to a host automatically when the host is added to the distributed switch. This section provides an overview of key VDS parameters.

Host Uplink Connections (vmnics) and dvuplink Parameters

VDS has a new abstraction, called dvuplink, for the physical Ethernet network adaptors (vmnics) on each host. It is defined during the creation of the VDS and can be considered a template for the individual vmnics on each host. All the properties (including network adaptor teaming, load balancing and failover policies on the VDS and dvportgroups) are configured on dvuplinks. These dvuplink properties are automatically applied to the vmnics on individual hosts when a host is added to the VDS and each vmnic on the host is mapped to a dvuplink. This dvuplink abstraction therefore provides the advantage of consistently applying teaming and failover configurations to all the hosts' physical Ethernet network adaptors (vmnics).

Figure 2 shows two ESXi hosts with four Ethernet network adaptors each. When these hosts are added to the VDS, with four dvuplinks configured on a dvuplink port group, administrators must assign the network adaptors (vmnics) of the hosts to dvuplinks. To illustrate the mapping of dvuplinks to vmnics, Figure 2 shows one type of mapping where ESXi host vmnic0 is mapped to dvuplink1, vmnic1 to dvuplink2, and so on. Customers can choose a different mapping, if required, where vmnic0 is mapped to a different dvuplink instead of dvuplink1. VMware recommends having consistent mapping across different hosts because it reduces complexity in the environment.

[Figure: two ESXi hosts, each running virtual machines connected to dvportgroups PG-A and PG-B on one vSphere Distributed Switch; the dvuplink port group defines dvuplink1 through dvuplink4, mapped on each host to vmnic0 through vmnic3]

Figure 2. dvuplink-to-vmnic Mapping
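The consistent-mapping recommendation can be pictured with a small sketch. The data structures and function below are purely illustrative and are not a VMware API; they only model the idea of one dvuplink template applied uniformly across hosts, with a check for hosts that deviate.

```python
# Sketch: modeling the dvuplink-to-vmnic mapping from Figure 2 and
# verifying that every host uses the same mapping as the VDS-level
# template. Names and structures are illustrative, not a VMware API.

DVUPLINK_TEMPLATE = {          # mapping chosen once for the whole VDS
    "vmnic0": "dvuplink1",
    "vmnic1": "dvuplink2",
    "vmnic2": "dvuplink3",
    "vmnic3": "dvuplink4",
}

hosts = {
    "esxi-host1": {"vmnic0": "dvuplink1", "vmnic1": "dvuplink2",
                   "vmnic2": "dvuplink3", "vmnic3": "dvuplink4"},
    "esxi-host2": {"vmnic0": "dvuplink1", "vmnic1": "dvuplink2",
                   "vmnic2": "dvuplink3", "vmnic3": "dvuplink4"},
}

def inconsistent_hosts(hosts, template):
    """Return the hosts whose vmnic-to-dvuplink mapping deviates."""
    return [name for name, mapping in hosts.items() if mapping != template]

print(inconsistent_hosts(hosts, DVUPLINK_TEMPLATE))  # [] -> all consistent
```

A check like this is the kind of audit an administrator could script against inventory data; with consistent mappings, teaming and failover behavior is identical on every host.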


As a best practice, customers should also try to deploy hosts with the same number of physical Ethernet network adaptors and with similar port speeds. Also, because the number of dvuplinks on a VDS depends on the maximum number of physical Ethernet network adaptors on a host, administrators should take that into account during dvuplink port group configuration. Customers always have the option to modify this dvuplink configuration based on new hardware capabilities.

Traffic Types and dvportgroup Parameters

Similar to port groups on standard switches, dvportgroups define how the connection is made through the VDS to the network. The VLAN ID, traffic shaping, port security, teaming and load balancing parameters are configured on these dvportgroups. The virtual ports (dvports) connected to a dvportgroup share the same properties configured on the dvportgroup. When customers want a group of virtual machines to share the same security and teaming policies, they must make sure that the virtual machines are part of one dvportgroup. Customers can choose to define different dvportgroups based on the different traffic types they have in their environment, or based on the different tenants or applications they support. If desired, multiple dvportgroups can share the same VLAN ID.

In this example deployment, the dvportgroup classification is based on the traffic types running in the virtual infrastructure. After administrators understand the different traffic types in the virtual infrastructure and identify specific security, reliability and performance requirements for individual traffic types, the next step is to create unique dvportgroups associated with each traffic type. As was previously mentioned, the dvportgroup configuration defined at the VDS level is automatically pushed down to every host that is added to the VDS. For example, in Figure 2, the two dvportgroups, PG-A (yellow) and PG-B (green), defined at the distributed switch level are available on each of the ESXi hosts that are part of that VDS.

dvportgroup Specific Configuration

After customers decide on the number of unique dvportgroups they want to create in their environment, they can start configuring them. The configuration options/parameters are similar to those available with port groups on vSphere standard switches. There are some additional options available on VDS dvportgroups that are related to teaming setup and are not available on vSphere standard switches. Customers can configure the following key parameters for each dvportgroup:

• Number of virtual ports (dvports)
• Port binding (static, dynamic, ephemeral)
• VLAN trunking/private VLANs
• Teaming and load balancing, along with active and standby links
• Bidirectional traffic-shaping parameters
• Port security

As part of the teaming algorithm support, VDS provides a unique approach to load balancing traffic across the teamed network adaptors. This approach is called load-based teaming (LBT), which distributes the traffic across the network adaptors based on the percentage utilization of traffic on those adaptors. The LBT algorithm works on both the ingress and egress direction of the network adaptor traffic, as opposed to the hashing algorithms that work only in the egress direction (traffic flowing out of the network adaptor). Also, LBT prevents the worst-case scenario that can happen with hashing algorithms, where all traffic hashes to one network adaptor of the team while the other network adaptors are not used to carry any traffic. To improve the utilization of all the links/network adaptors, VMware recommends the use of this advanced VDS feature, LBT. The LBT approach is recommended over EtherChannel on the physical switches with route-based-on-IP-hash configuration on the virtual switch.
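To make the LBT idea concrete, here is a minimal conceptual model, not the ESXi implementation. The threshold used here reflects the commonly documented behavior (roughly 75 percent mean utilization over a 30-second window triggers a move); all names and numbers are illustrative.

```python
# Conceptual sketch of load-based teaming (LBT): when a teamed adaptor's
# utilization exceeds a threshold, move a dvport's traffic to the
# least-loaded adaptor in the team. Illustrative model only.

THRESHOLD = 0.75  # assumed trigger: ~75% utilization over the window

def rebalance(uplink_load, port_assignments, port_traffic):
    """Move dvports off any uplink whose utilization exceeds THRESHOLD."""
    moves = {}
    for port, uplink in port_assignments.items():
        if uplink_load[uplink] > THRESHOLD:
            target = min(uplink_load, key=uplink_load.get)  # least loaded
            if target != uplink:
                share = port_traffic[port]
                uplink_load[uplink] -= share   # traffic leaves old uplink
                uplink_load[target] += share   # and lands on the new one
                port_assignments[port] = target
                moves[port] = (uplink, target)
    return moves

load = {"dvuplink1": 0.90, "dvuplink2": 0.20}
assignments = {"vm-port-5": "dvuplink1", "vm-port-6": "dvuplink2"}
traffic = {"vm-port-5": 0.30, "vm-port-6": 0.10}

moves = rebalance(load, assignments, traffic)
print(moves)  # vm-port-5 moves from dvuplink1 to dvuplink2
```

Note how the move is based on measured utilization rather than a static hash of addresses, which is why LBT avoids the pathological case where every flow hashes to the same adaptor.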


Port security policies at the port group level enable customers to protect against certain activity that might compromise security. For example, a hacker might impersonate a virtual machine and gain unauthorized access by spoofing the virtual machine's MAC address. VMware recommends setting "MAC Address Changes" and "Forged Transmits" to "Reject" to help protect against attacks launched by a rogue guest operating system. Customers should also set "Promiscuous Mode" to "Reject" unless they want to monitor traffic for network troubleshooting or intrusion detection purposes.

NIOC

Network I/O Control (NIOC) is the traffic management capability available on VDS. The NIOC concept revolves around resource pools that are similar in many ways to the ones existing for CPU and memory. vSphere and network administrators can now allocate I/O shares to different traffic types, similarly to allocating CPU and memory resources to a virtual machine. The share parameter specifies the relative importance of a traffic type over other traffic and provides a guaranteed minimum when the other traffic competes for a particular network adaptor. The shares are specified in abstract units, numbered 1 to 100. Customers can provision shares to different traffic types based on the amount of resources each traffic type requires.

This capability of provisioning I/O resources is very useful in situations where multiple traffic types compete for resources. For example, in a deployment where vMotion and virtual machine traffic types are flowing through one network adaptor, it is possible that vMotion activity might impact the virtual machine traffic performance. In this situation, shares configured in NIOC provide the required isolation between the vMotion and virtual machine traffic types and prevent one flow (traffic type) from dominating the other. NIOC configuration provides one more parameter that customers can utilize if they want to put a limit on a particular traffic type. This parameter is called the limit. The limit configuration specifies the absolute maximum bandwidth for a traffic type on a host and is specified in Mbps. The NIOC limits and shares parameters work only on outbound traffic, i.e., traffic that is flowing out of the ESXi host.

VMware recommends that customers utilize this traffic management feature whenever they have multiple traffic types flowing through one network adaptor, a situation that is more prominent in 10 Gigabit Ethernet (10GbE) network deployments but can happen in 1GbE network deployments as well. The common use case for NIOC in 1GbE network adaptor deployments is when the traffic from different workloads or different customer virtual machines is carried over the same network adaptor. As multiple-workload traffic flows through a network adaptor, it becomes important to provide I/O resources based on the needs of the workload. With the release of vSphere 5, customers can now make use of the new user-defined network resource pools capability and can allocate I/O resources to different workloads or different customer virtual machines, depending on their needs. This user-defined network resource pools feature provides granular control in allocating I/O resources and meeting the service-level agreement (SLA) requirements for virtualized tier 1 workloads.

Bidirectional Traffic Shaping

Besides NIOC, there is another traffic-shaping feature available in the vSphere platform. It can be configured at the dvportgroup or dvport level. Customers can shape both inbound and outbound traffic using three parameters: average bandwidth, peak bandwidth and burst size. Customers who want more granular traffic-shaping controls to manage their traffic types can take advantage of this VDS capability along with the NIOC feature. It is recommended that the network administrators in your organization be involved when configuring these granular traffic parameters. These controls make sense only when there are oversubscription scenarios, caused by an oversubscribed physical switch infrastructure or virtual infrastructure, that are causing network performance issues.
So it is very important to understand the physical and virtual network environment before making any bidirectional traffic-shaping configurations.
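The shares-and-limit arithmetic behind NIOC can be sketched as follows. This is a simplified model under the assumption of one fully saturated adaptor with every traffic type active; the share values are hypothetical, and the real scheduler also redistributes bandwidth that a limited traffic type leaves unused, which this sketch omits.

```python
# Sketch of NIOC shares arithmetic: on a congested adaptor, shares divide
# the egress bandwidth proportionally, and an optional limit (in Mbps)
# caps a traffic type regardless of its shares. Illustrative model only.

def nioc_allocation(shares, link_mbps, limits=None):
    """Per-traffic-type egress bandwidth on one saturated adaptor (Mbps)."""
    limits = limits or {}
    total = sum(shares.values())
    alloc = {}
    for traffic, s in shares.items():
        guaranteed = link_mbps * s / total  # proportional minimum
        alloc[traffic] = min(guaranteed, limits.get(traffic, float("inf")))
    return alloc

# Hypothetical share values on a saturated 10GbE adaptor, with vMotion
# additionally capped at 2,000 Mbps.
shares = {"vm": 50, "vmotion": 50, "iscsi": 50, "ft": 50, "mgmt": 10}
alloc = nioc_allocation(shares, 10_000, limits={"vmotion": 2_000})
print(alloc)
```

The key property of shares is that they only bite under contention; when the adaptor is idle, any traffic type may burst to line rate, whereas a limit caps the traffic type at all times.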


Physical Network Switch Parameters

The configurations of the VDS and the physical network switch should go hand in hand to provide resilient, secure and scalable connectivity to the virtual infrastructure. The following are some key switch configuration parameters to which the customer should pay attention.

VLAN

If VLANs are used to provide logical isolation between different traffic types, it is important to make sure that those VLANs are carried over to the physical switch infrastructure. To do so, enable virtual switch tagging (VST) on the virtual switch, and trunk all VLANs to the physical switch ports. For security reasons, it is recommended that customers not use the native VLAN ID 1 (the default) for any VMware infrastructure traffic.

Spanning Tree Protocol

Spanning Tree Protocol (STP) is not supported on virtual switches, so no STP configuration is required on VDS. But it is important to enable this protocol on the physical switches. STP makes sure that there are no loops in the network. As a best practice, customers should configure the following:

• Use PortFast on ESXi host-facing physical switch ports. With this setting, network convergence on these switch ports will take place quickly after a failure, because the port will enter the STP forwarding state immediately, bypassing the listening and learning states.

• Use the Bridge Protocol Data Unit (BPDU) guard feature to enforce the STP boundary. This configuration protects against any invalid device connection on the ESXi host-facing access switch ports. As was previously mentioned, VDS doesn't support STP, so it doesn't send any BPDU frames to the switch port. However, if any BPDU is seen on these ESXi host-facing access switch ports, the BPDU guard feature puts that particular switch port in the error-disabled state. The switch port is completely shut down, which prevents it from affecting the spanning tree topology.

The recommendation of enabling PortFast and the BPDU guard feature on the switch ports is valid only when customers connect nonswitching/bridging devices to these ports. The switching/bridging devices can be hardware-based physical boxes or servers running a software-based switching/bridging function.
Customers should make sure that there is no switching/bridging function enabled on the ESXi hosts that are connected to the physical switch ports.

However, in the scenario where an ESXi host has a guest virtual machine that is configured to perform a bridging function, the virtual machine will generate BPDU frames and send them out to the VDS, which then forwards the BPDU frames through the network adaptor to the physical switch port. When the switch port configured with BPDU guard receives the BPDU frame, the switch will disable the port, and the virtual machine will lose connectivity. To avoid this network failure scenario when running a software bridging function on an ESXi host, customers should disable the PortFast and BPDU guard configuration on that physical switch port and run STP.

If customers are concerned about attacks that can generate BPDU frames, they should make use of VMware vShield App™, which can block the frames and protect the virtual infrastructure from such layer 2 attacks. Refer to the VMware vShield™ product documentation for more details on how to secure your vSphere virtual infrastructure: http://www.vmware.com/products/vshield/overview.html.

Link Aggregation Setup

Link aggregation is used to increase throughput and improve resiliency by combining multiple network connections. There are various proprietary solutions on the market, along with the vendor-independent IEEE 802.3ad (LACP) standard-based implementation. All solutions establish a logical channel between the two endpoints, using multiple physical links. In the vSphere virtual infrastructure, the two ends of the logical channel are the VDS and the physical switch. These two switches must be configured with link aggregation parameters before the logical channel is established. Currently, VDS supports static link aggregation configuration and does not provide support for dynamic LACP. When customers want to enable link aggregation on a physical switch, they should configure static link aggregation on the physical switch and select route based on IP hash as the network adaptor teaming algorithm on the VDS.
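The reason a static channel is required becomes clearer with a sketch of IP hash teaming: each flow is pinned to one uplink by hashing its source/destination IP pair, so the physical switch must treat all teamed ports as one logical channel for return traffic to arrive correctly. The hash below (XOR of the two addresses, modulo the team size) follows the commonly documented scheme but is illustrative rather than the exact ESXi computation.

```python
# Sketch of route-based-on-IP-hash uplink selection: a flow's uplink is
# derived from its IP pair, so the same pair always maps to the same
# uplink. Simplified hash, not the exact ESXi computation.

import ipaddress

def select_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick a team member index for the given IP pair."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# The same IP pair lands on the same uplink in both directions (XOR is
# symmetric), which is why both switch ports must belong to one
# aggregated channel on the physical side.
a = select_uplink("10.0.0.5", "10.0.1.9", 2)
b = select_uplink("10.0.1.9", "10.0.0.5", 2)
print(a == b)  # True
```

A corollary worth noting: a single IP pair can never use more than one uplink's worth of bandwidth under IP hash, which is one of the reasons the paper recommends LBT instead where possible.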

TECHNICAL WHITE PAPER / 11

VMware vSphere Distributed Switch Best Practices

When establishing the logical channel with multiple physical links, customers should make sure that the Ethernet network adaptor connections from the host are terminated on a single physical switch. However, if customers have deployed clustered physical switch technology, the Ethernet network adaptor connections can be terminated on two different physical switches. The clustered physical switch technology is referred to by different names by networking vendors. For example, Cisco calls its switch clustering solution Virtual Switching System; Brocade calls its version Virtual Cluster Switching. Refer to the networking vendors' guidelines and configuration details when deploying switch clustering technology.

Link-State Tracking
Link-state tracking is a feature available on Cisco switches to manage the link state of downstream ports (ports connected to servers) based on the status of upstream ports (ports connected to aggregation/core switches). When there is any failure on the upstream links connected to aggregation or core switches, the associated downstream link status goes down. The server connected on the downstream link is then able to detect the failure and reroute the traffic on other working links. This feature therefore provides protection from network failures due to failed upstream ports in nonmesh topologies. Unfortunately, this feature is not available on all vendors' switches, and even where it is available, it might not be referred to as link-state tracking. Customers should talk to their switch vendors to find out whether a similar feature is supported on their switches. Figure 3 shows the resilient mesh topology on the left and a simple loop-free topology on the right. VMware highly recommends deploying the mesh topology shown on the left, which provides a highly reliable redundant design and doesn't need a link-state tracking feature. Customers who don't have high-end networking expertise and are also limited in the number of switch ports might prefer the deployment shown on the right.
In this deployment, customers don't have to run STP, because there are no loops in the network design. The downside of this simple design is seen when there is a failure in the link between the access and aggregation switches. In that failure scenario, the server will continue to send traffic on the same network adaptor even though the access layer switch is dropping the traffic at the upstream interface. To avoid this blackholing of server traffic, customers can enable link-state tracking on the virtual and physical switches and indicate any failure between the access and aggregation switch layers to the server through link-state information.

[Figure: two topology diagrams of ESXi hosts and virtual machines on a vSphere Distributed Switch, uplinked through access layer switches to the aggregation layer. Left panel: resilient mesh topology with loops; STP is needed. Right panel: resilient topology with no loops; no STP is needed, but link-state tracking is.]

Figure 3. Resilient Loop and No-Loop Topologies


VDS has a default network failover detection configuration set as "link status only." Customers should keep this configuration if they are enabling the link-state tracking feature on physical switches. If link-state tracking capability is not available on physical switches, and there are no redundant paths available in the design, customers can make use of the beacon probing feature available on VDS. The beacon probing function is a software solution available on virtual switches for detecting link failures upstream from the access layer physical switch to the aggregation/core switches. Beacon probing is most useful with three or more uplinks in a team.
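To make the beacon-probing idea concrete, the following is a minimal illustrative model, not VMware's implementation: each uplink in a team periodically sends a beacon frame that should be received on every other uplink, and an uplink whose physical link is up but which neither hears any peer nor is heard by any peer is judged to have a failed upstream path. With only two uplinks, a lost beacon cannot tell which side failed, which is why three or more uplinks are recommended.

```python
def failed_uplinks(uplinks, received):
    """Identify uplinks with failed upstream paths in a toy beacon model.

    uplinks:  list of uplink names whose physical link status is 'up'.
    received: dict mapping each uplink to the set of peers whose
              beacons it actually received.
    A verdict is possible only with three or more uplinks in the team.
    """
    if len(uplinks) < 3:
        return set()  # two uplinks cannot disambiguate the failure
    failed = set()
    for u in uplinks:
        peers = [p for p in uplinks if p != u]
        # u's path is suspect if no peer heard its beacon and it heard no peer
        heard_by_peers = any(u in received[p] for p in peers)
        heard_peers = bool(received[u])
        if not heard_by_peers and not heard_peers:
            failed.add(u)
    return failed

# dvuplink2's upstream path is broken: it hears nobody and nobody hears it.
team = ["dvuplink1", "dvuplink2", "dvuplink3"]
rx = {
    "dvuplink1": {"dvuplink3"},
    "dvuplink2": set(),
    "dvuplink3": {"dvuplink1"},
}
print(failed_uplinks(team, rx))  # {'dvuplink2'}
```

The sketch shows why a two-uplink team gains little from beacon probing: the function deliberately refuses a verdict below three uplinks.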

Maximum Transmission Unit

Make sure that the maximum transmission unit (MTU) configuration matches across the virtual and physical network switch infrastructure.
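One common way to verify a matching jumbo-frame MTU end to end is a don't-fragment ping of maximum payload size from the ESXi shell. This is a sketch; the destination address is a placeholder for a host on the same jumbo-frame network.

```
# From the ESXi shell: 8972-byte payload + 28 bytes of ICMP/IP headers = 9000 bytes
vmkping -d -s 8972 192.168.10.1
```

If any virtual or physical hop is configured with a smaller MTU, the don't-fragment ping fails and points to the mismatch.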

Rack Server in Example Deployment
After looking at the major components in the example deployment and the key virtual and physical switch parameters, let's take a look at the different types of servers that customers can have in their environment. Customers can deploy an ESXi host on either a rack server or a blade server. This section discusses a deployment in which the ESXi host is running on a rack server. Two types of rack server configuration will be described in the following section:
• Rack server with eight 1GbE network adaptors
• Rack server with two 10GbE network adaptors
The various VDS design approaches will be discussed for each of the two configurations.

Rack Server with Eight 1GbE Network Adaptors
In a rack server deployment with eight 1GbE network adaptors per host, customers can either use the traditional static design approach of allocating network adaptors to each traffic type or make use of advanced features of VDS such as NIOC and LBT. The NIOC and LBT features help provide a dynamic design that efficiently utilizes I/O resources. In this section, both the traditional and new design approaches are described, along with their pros and cons.
Design Option 1 – Static Configuration
This design option follows the traditional approach of statically allocating network resources to the different virtual infrastructure traffic types. As shown in Figure 4, each host has eight Ethernet network adaptors. Four are connected to the first access layer switch; the other four are connected to the second access layer switch, to avoid a single point of failure. Let's look in detail at how VDS parameters are configured.


[Figure: two clusters of ESXi hosts (Cluster 1 and Cluster 2), each host running virtual machines, attached to one vSphere Distributed Switch and uplinked through the access layer switches to the aggregation layer. Legend: port groups PG-A and PG-B.]

Figure 4. Rack Server with Eight 1GbE Network Adaptors

dvuplink Configuration
To support the maximum of eight 1GbE network adaptors per host, the dvuplink port group is configured with eight dvuplinks (dvuplink1–dvuplink8). On the hosts, dvuplink1 is associated with vmnic0, dvuplink2 is associated with vmnic1, and so on. It is a recommended practice to change the names of the dvuplinks to something meaningful and easy to track. For example, dvuplink1, which gets associated with a vmnic on a motherboard, can be renamed as "LOM-uplink1"; dvuplink2, which gets associated with a vmnic on an expansion card, can be renamed as "Expansion-uplink1."
If the hosts have some Ethernet network adaptors as LAN on motherboard (LOM) and some on expansion cards, for a better resiliency story, VMware recommends selecting one network adaptor from the LOM and one from an expansion card when configuring network adaptor teaming. To configure this teaming on a VDS, administrators must pay attention to the dvuplink and vmnic association, along with the dvportgroup configuration where network adaptor teaming is enabled. In the network adaptor teaming configuration on a dvportgroup, administrators must choose the various dvuplinks that are part of a team. If the dvuplinks are named appropriately according to the host vmnic association, administrators can select "LOM-uplink1" and "Expansion-uplink1" when configuring the teaming option for a dvportgroup.
dvportgroup Configuration
As described in Table 2, there are five different port groups that are configured for the five different traffic types. Customers can create up to 5,000 unique port groups per VDS. In this example deployment, the decision on creating different port groups is based on the number of traffic types.
According to Table 2, dvportgroup PG-A is created for the management traffic type. There are other dvportgroups defined for the other traffic types. The following are the key configurations of dvportgroup PG-A:
• Teaming option: Explicit failover order provides a deterministic way of directing traffic to a particular uplink. By selecting dvuplink1 as an active uplink and dvuplink2 as a standby uplink, management traffic will be carried over dvuplink1 unless there is a failure on dvuplink1. All other dvuplinks are configured as unused. Configuring the failback option to "No" is also recommended, to avoid the flapping of traffic between two network adaptors.
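The explicit failover order above can be modeled as a simple selection rule: use the first healthy uplink from the active list, then the first healthy standby; unused uplinks are never considered. A minimal sketch, illustrative rather than ESXi code (note it is stateless, so it does not capture the failback "No" behavior, in which a recovered active adaptor stays inactive):

```python
def select_uplink(active, standby, healthy):
    """Return the uplink that carries the port group's traffic.

    active/standby: ordered lists of dvuplink names.
    healthy: set of uplinks whose link status is up.
    Unused uplinks are simply absent from both lists.
    """
    for uplink in list(active) + list(standby):
        if uplink in healthy:
            return uplink
    return None  # no connectivity at all

# PG-A: management traffic, per Table 2
active, standby = ["dvuplink1"], ["dvuplink2"]
print(select_uplink(active, standby, {"dvuplink1", "dvuplink2"}))  # dvuplink1
print(select_uplink(active, standby, {"dvuplink2"}))               # dvuplink2
```

Because only dvuplink1 and dvuplink2 appear in the lists, traffic for this port group can never land on the six unused uplinks, which is exactly the isolation the static design relies on.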


The failback option determines how a physical adaptor is returned to active duty after recovering from a failure. If failback is set to "No," a failed adaptor is left inactive, even after recovery, until another currently active adaptor fails and requires a replacement.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.
• There are several other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs. For example, customers can configure PVLAN to provide isolation when there are limited VLANs available in the environment.
As you follow the dvportgroups configuration in Table 2, you can see that each traffic type is carried over a specific dvuplink, with the exception of the virtual machine traffic type. The virtual machine traffic type uses two active links, dvuplink7 and dvuplink8, and these links are utilized through the LBT algorithm. As was previously mentioned, the LBT algorithm is much more efficient than the standard hashing algorithm in utilizing link bandwidth.

TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION    | ACTIVE UPLINK       | STANDBY UPLINK | UNUSED UPLINK
Management      | PG-A       | Explicit Failover | dvuplink1           | dvuplink2      | 3, 4, 5, 6, 7, 8
vMotion         | PG-B       | Explicit Failover | dvuplink3           | dvuplink4      | 1, 2, 5, 6, 7, 8
FT              | PG-C       | Explicit Failover | dvuplink4           | dvuplink3      | 1, 2, 5, 6, 7, 8
iSCSI           | PG-D       | Explicit Failover | dvuplink5           | dvuplink6      | 1, 2, 3, 4, 7, 8
Virtual Machine | PG-E       | LBT               | dvuplink7/dvuplink8 | None           | 1, 2, 3, 4, 5, 6

Table 2. Static Design Configuration

Physical Switch Configuration
The external physical switch, where the rack servers' network adaptors are connected, is configured as a trunk with all the appropriate VLANs enabled. As described in the "Physical Network Switch Parameters" section, the following switch configurations are performed based on the VDS setup described in Table 2.
• Enable STP on the trunk ports facing the ESXi hosts, along with the PortFast mode and BPDU guard feature.
• The teaming configuration on VDS is static, so no link aggregation is configured on the physical switches.
• Because of the mesh topology deployment, as shown in Figure 4, the link-state tracking feature is not required on the physical switches.
In this design approach, resiliency of the infrastructure traffic is achieved through active/standby uplinks, and security is accomplished by providing separate physical paths for the different traffic types. However, with this design, the I/O resources are underutilized, because the dvuplink2 and dvuplink6 standby links are not used to send or receive traffic. Also, there is no flexibility to allocate more bandwidth to a traffic type when it needs it.


There is another variation to the static design approach that addresses the need of some customers to provide higher bandwidth to the storage and vMotion traffic types. In the static design that was previously described, iSCSI and vMotion traffic is limited to 1GbE. If a customer wants to support higher bandwidth for iSCSI, they can make use of the iSCSI multipathing solution. Also, with the release of vSphere 5, vMotion traffic can be carried over multiple Ethernet network adaptors through the support of multi-network adaptor vMotion, thereby providing higher bandwidth to the vMotion process. For more details on how to set up iSCSI multipathing, refer to the VMware vSphere Storage guide: https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
The configuration of multi-network adaptor vMotion is quite similar to the iSCSI multipath setup, where administrators must create two separate vmkernel interfaces and bind each one to a separate dvportgroup. This configuration with two separate dvportgroups provides the connectivity to two different Ethernet network adaptors or dvuplinks.

TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION    | ACTIVE UPLINK       | STANDBY UPLINK | UNUSED UPLINK
Management      | PG-A       | Explicit Failover | dvuplink1           | dvuplink2      | 3, 4, 5, 6, 7, 8
vMotion         | PG-B1      | None              | dvuplink3           | dvuplink4      | 1, 2, 5, 6, 7, 8
vMotion         | PG-B2      | None              | dvuplink4           | dvuplink3      | 1, 2, 5, 6, 7, 8
FT              | PG-C       | Explicit Failover | dvuplink2           | dvuplink1      | 3, 4, 5, 6, 7, 8
iSCSI           | PG-D1      | None              | dvuplink5           | None           | 1, 2, 3, 4, 6, 7, 8
iSCSI           | PG-D2      | None              | dvuplink6           | None           | 1, 2, 3, 4, 5, 7, 8
Virtual Machine | PG-E       | LBT               | dvuplink7/dvuplink8 | None           | 1, 2, 3, 4, 5, 6

Table 3. Static Design Configuration with iSCSI Multipathing and Multi-Network Adaptor vMotion

As shown in Table 3, there are two entries each for the vMotion and iSCSI traffic types. Also shown is a list of the additional dvportgroup configurations required to support the multi-network adaptor vMotion and iSCSI multipathing processes. For multi-network adaptor vMotion, dvportgroups PG-B1 and PG-B2 are listed, configured with dvuplink3 and dvuplink4 respectively as active links. And for iSCSI multipathing, dvportgroups PG-D1 and PG-D2 are connected to dvuplink5 and dvuplink6 respectively as active links. Load balancing across the multiple dvuplinks is performed by the multipathing logic in the iSCSI process and by the ESXi platform in the vMotion process. Configuring the teaming policies for these dvportgroups is not required. FT, management and virtual machine traffic-type dvportgroup configuration and physical switch configuration for this design remain the same as those described in "Design Option 1" of the previous section.
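On the ESXi side, binding the two iSCSI vmkernel interfaces to the software iSCSI adapter can be done with esxcli. This is a sketch; the adapter name vmhba33 and the vmkernel interface names vmk1/vmk2 are placeholders for the interfaces attached to PG-D1 and PG-D2 in your environment.

```
# Bind each iSCSI vmkernel interface (one per dvportgroup PG-D1/PG-D2)
# to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

With both portals bound, the storage multipathing logic, rather than the teaming policy, balances iSCSI sessions across the two dvuplinks.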


This static design approach improves on the first design by using advanced capabilities such as iSCSI multipathing and multi-network adaptor vMotion. But at the same time, this option has the same challenges related to underutilized resources and inflexibility in allocating additional resources on the fly to different traffic types.
Design Option 2 – Dynamic Configuration with NIOC and LBT
After looking at the traditional design approach with static uplink configurations, let's take a look at the VMware-recommended design option that takes advantage of the advanced VDS features such as NIOC and LBT.
In this design, the connectivity to the physical network infrastructure remains the same as that described in the static design option. However, instead of allocating specific dvuplinks to individual traffic types, the ESXi platform utilizes those dvuplinks dynamically. To illustrate this dynamic design, each virtual infrastructure traffic type's bandwidth utilization is estimated. In a real deployment, customers should first monitor the virtual infrastructure traffic over a period of time, to gauge the bandwidth utilization, and then come up with bandwidth numbers for each traffic type. The following are some bandwidth numbers estimated by traffic type:
• Management traffic (<1GbE)
• vMotion (1GbE)
• FT (1GbE)
• iSCSI (1GbE)
• Virtual machine (2GbE)
Based on this bandwidth information, administrators can provision appropriate I/O resources to each traffic type by using the NIOC feature of VDS. Let's take a look at the VDS parameter configurations for this design, as well as the NIOC setup. The dvuplink port group configuration remains the same, with eight dvuplinks created for the eight 1GbE network adaptors. The dvportgroup configuration is described in the following section.
dvportgroup Configuration
In this design, all dvuplinks are active and there are no standby and unused uplinks, as shown in Table 4. All dvuplinks are therefore available for use by the teaming algorithm. The following are the key parameter configurations of dvportgroup PG-A:
• Teaming option: LBT is selected as the teaming algorithm. With the LBT configuration, management traffic initially will be scheduled based on the virtual port ID hash. Depending on the hash output, management traffic is sent out over one of the dvuplinks. Other traffic types in the virtual infrastructure can also be scheduled on the same dvuplink initially. However, when the utilization of the dvuplink goes beyond the 75 percent threshold, the LBT algorithm will be invoked and some of the traffic will be moved to other underutilized dvuplinks. It is possible that management traffic will be moved to other dvuplinks when such an LBT event occurs.
• The failback option means going from using a standby link to using an active uplink after the active uplink comes back into operation after a failure. This failback option applies when there are active and standby dvuplink configurations. In this design, there are no standby dvuplinks. So when an active uplink fails, the traffic flowing on that dvuplink is moved to another working dvuplink, and if the failed dvuplink comes back, the LBT algorithm will schedule new traffic on that dvuplink. This option is left at the default.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.
• There are several other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs. For example, they can configure PVLAN to provide isolation when there are limited VLANs available in the environment.
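The LBT behavior described above can be sketched as a periodic rebalancing rule: when a dvuplink's measured utilization crosses the 75 percent threshold, move one of its flows to the least-loaded dvuplink. This is a simplified illustration of the policy, not VMware's implementation (which, among other things, evaluates utilization over a time window rather than instantaneously):

```python
THRESHOLD = 0.75  # LBT moves flows when uplink utilization exceeds 75 percent

def rebalance(flows, capacity):
    """One rebalancing pass over a team of uplinks.

    flows:    dict uplink -> {flow_name: bandwidth}
    capacity: per-uplink capacity in the same units as the bandwidths.
    Moves a single flow from an overloaded uplink to the least-loaded one.
    """
    load = {u: sum(f.values()) for u, f in flows.items()}
    for uplink, used in load.items():
        if used / capacity > THRESHOLD and len(flows[uplink]) > 1:
            target = min(load, key=load.get)
            if target == uplink:
                continue
            # move the smallest flow off the hot uplink
            name = min(flows[uplink], key=flows[uplink].get)
            flows[target][name] = flows[uplink].pop(name)
            return f"moved {name} from {uplink} to {target}"
    return "balanced"

# All traffic initially hashed onto dvuplink1 (1000 Mbps capacity)
flows = {
    "dvuplink1": {"mgmt": 100, "vmotion": 700},
    "dvuplink2": {},
}
print(rebalance(flows, 1000))  # moved mgmt from dvuplink1 to dvuplink2
```

After the move, dvuplink1 carries 700 Mbps (70 percent), below the threshold, so a second pass reports the team as balanced.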


As you follow the dvportgroups configuration in Table 4, you can see that each traffic type has all dvuplinks active and that these links are utilized through the LBT algorithm. Let's now look at the NIOC configuration described in the last two columns of Table 4.
The NIOC configuration in this design helps provide the appropriate I/O resources to the different traffic types. Based on the previously estimated bandwidth numbers per traffic type, the shares parameter is configured in the NIOC shares column in Table 4. The shares values specify the relative importance of specific traffic types, and NIOC ensures that during contention scenarios on the dvuplinks, each traffic type gets its allocated bandwidth. For example, a shares configuration of 10 each for vMotion, iSCSI and FT allocates equal bandwidth to these traffic types. Virtual machines get the highest bandwidth with 20 shares, and management gets lower bandwidth with 5 shares.
To illustrate how share values translate to bandwidth numbers, let's take an example of a 1GbE capacity dvuplink carrying all five traffic types. This is a worst-case scenario in which all traffic types are mapped to one dvuplink. It will never happen when customers enable the LBT feature, because LBT will balance the traffic based on the utilization of the uplinks. This example shows how much bandwidth each traffic type will be allowed on one dvuplink during a contention or oversubscription scenario when LBT is not enabled:
• Total shares: management (5) + vMotion (10) + FT (10) + iSCSI (10) + virtual machine (20) = 55
• Management: 5 shares; (5/55) * 1GbE = 90.91Mbps
• vMotion: 10 shares; (10/55) * 1GbE = 181.82Mbps
• FT: 10 shares; (10/55) * 1GbE = 181.82Mbps
• iSCSI: 10 shares; (10/55) * 1GbE = 181.82Mbps
• Virtual machine: 20 shares; (20/55) * 1GbE = 363.64Mbps
To calculate the bandwidth numbers during contention, first calculate the percentage of bandwidth for a traffic type by dividing its share value by the total available share number (55). In the second step, multiply the total bandwidth of the dvuplink (1GbE) by the percentage calculated in the first step. For example, the 5 shares allocated to management traffic translate to 90.91Mbps of bandwidth for the management process on a fully utilized 1GbE network adaptor. In this example, a custom share configuration is discussed, but a customer can make use of the predefined high (100), normal (50) and low (25) share values when assigning them to different traffic types.
The vSphere platform takes these configured share values and applies them per uplink. The schedulers running at each uplink are responsible for making sure that the bandwidth resources are allocated according to the shares. In the case of an eight 1GbE network adaptor deployment, there are eight schedulers running. Depending on the number of traffic types scheduled on a particular uplink, the scheduler will divide the bandwidth among the traffic types, based on the share numbers. For example, if only FT (10 shares) and management (5 shares) traffic are flowing through dvuplink5, FT traffic will get double the bandwidth of management traffic. Also, when there is no management traffic flowing, all the bandwidth can be utilized by the FT process. This flexibility in allocating I/O resources is the key benefit of the NIOC feature.
The NIOC limits parameter of Table 4 is not configured in this design. The limits value specifies an absolute maximum limit on egress traffic for a traffic type, in Mbps. This configuration places a hard limit on a traffic type even if I/O resources are available to use, so using the limits configuration is not recommended unless you really want to cap the traffic even when additional resources are available.
There is no change in physical switch configuration in this design approach, even with the choice of the new LBT algorithm. The LBT teaming algorithm doesn't require any special configuration on physical switches. Refer to the physical switch settings described in "Design Option 1."
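The worst-case arithmetic above is easy to reproduce. The sketch below (illustrative only) computes the per-traffic-type bandwidth on a single saturated dvuplink from the configured share values:

```python
def contention_bandwidth(shares, link_mbps):
    """Bandwidth (Mbps) each traffic type receives on one fully utilized
    uplink, assuming every type is active there (worst case, LBT disabled)."""
    total = sum(shares.values())
    return {t: round(s / total * link_mbps, 2) for t, s in shares.items()}

# Share values from Table 4 on a 1GbE (1000 Mbps) dvuplink
shares = {"management": 5, "vmotion": 10, "ft": 10, "iscsi": 10, "vm": 20}
for traffic, mbps in contention_bandwidth(shares, 1000).items():
    print(f"{traffic}: {mbps} Mbps")
# management gets 90.91; vMotion, FT and iSCSI each get 181.82; vm gets 363.64
```

Because shares are relative, removing an idle traffic type from the dictionary redistributes its bandwidth across the remaining types, which mirrors NIOC's behavior when a traffic type stops flowing.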


TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION | ACTIVE UPLINK          | STANDBY UPLINK | NIOC SHARES | NIOC LIMITS
Management      | PG-A       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 5           | -
vMotion         | PG-B       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 10          | -
FT              | PG-C       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 10          | -
iSCSI           | PG-D       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 10          | -
Virtual Machine | PG-E       | LBT            | 1, 2, 3, 4, 5, 6, 7, 8 | None           | 20          | -

Table 4. Dynamic Design Configuration with NIOC and LBT

This design does not provide higher than 1GbE bandwidth to the vMotion and iSCSI traffic types, as is the case with the static design using multi-network adaptor vMotion and iSCSI multipathing. The LBT algorithm cannot split the infrastructure traffic across multiple dvuplink ports and utilize all the links. So even if vMotion dvportgroup PG-B has all eight 1GbE network adaptors as active uplinks, vMotion traffic will be carried over only one of the eight uplinks. The main advantage of this design is evident in the scenarios where the vMotion process is not using the uplink bandwidth and other traffic types are in need of the additional resources. In these situations, NIOC makes sure that the unused bandwidth is allocated to the other traffic types that need it.
This dynamic design option is the recommended approach because it takes advantage of the advanced VDS features and utilizes I/O resources efficiently. This option also provides active-active resiliency, where no uplinks are in standby mode. In this design approach, customers allow the vSphere platform to make the optimal decisions on scheduling traffic across multiple uplinks.
Some customers who have restrictions in the physical infrastructure in terms of bandwidth capacity across different paths and limited availability of the layer 2 domain might not be able to take advantage of this dynamic design option. When deploying this design option, it is important to consider all the different traffic paths that a traffic type can take and to make sure that the physical switch infrastructure can support the specific characteristics required for each traffic type. VMware recommends that vSphere and network administrators work together to understand the impact of the vSphere platform's traffic scheduling feature on the physical network infrastructure before deploying this design option.
Every customer environment is different, and the requirements for the traffic types are also different. Depending on the needs of the environment, a customer can modify these design options to fit their specific requirements. For example, customers can choose to use a combination of static and dynamic design options when they need higher bandwidth for iSCSI and vMotion activities. In this hybrid design, four uplinks can be statically allocated to the iSCSI and vMotion traffic types while the remaining four uplinks are used dynamically for the remaining traffic types. Table 5 shows the traffic types and associated port group configurations for the hybrid design. As shown in the table, management, FT and virtual machine traffic will be distributed on dvuplink1 to dvuplink4 through the vSphere platform's traffic scheduling features, NIOC and LBT. The remaining four dvuplinks are statically assigned to the vMotion and iSCSI traffic types.


TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION | ACTIVE UPLINK | STANDBY UPLINK | NIOC SHARES | NIOC LIMITS
Management      | PG-A       | LBT            | 1, 2, 3, 4    | None           | 5           | -
vMotion         | PG-B1      | None           | dvuplink5     | dvuplink6      | -           | -
vMotion         | PG-B2      | None           | dvuplink6     | dvuplink5      | -           | -
FT              | PG-C       | LBT            | 1, 2, 3, 4    | None           | 10          | -
iSCSI           | PG-D1      | None           | dvuplink7     | None           | -           | -
iSCSI           | PG-D2      | None           | dvuplink8     | None           | -           | -
Virtual Machine | PG-E       | LBT            | 1, 2, 3, 4    | None           | 20          | -

Table 5. Hybrid Design Configuration

Rack Server with Two 10GbE Network Adaptors
The two 10GbE network adaptor deployment model is becoming very common because of the benefits it provides through I/O consolidation. The key benefits include better utilization of I/O resources, simplified management and reduced CAPEX and OPEX. Although this deployment provides these benefits, there are some challenges when it comes to the traffic management aspects. Especially in highly consolidated virtualized environments, where more traffic types are carried over fewer 10GbE network adaptors, it becomes critical to prioritize the traffic types that are important and to provide the required SLA guarantees. The NIOC feature available on the VDS helps in this traffic management activity. In the following sections, you will see how to utilize this feature in the different designs.
As shown in Figure 5, rack servers with two 10GbE network adaptors are connected to the two access layer switches to avoid any single point of failure. Similar to the rack server with eight 1GbE network adaptors, the different VDS and physical switch parameter configurations are taken into account with this design. On the physical switch side, the new 10GbE switches might have support for FCoE, which enables convergence for LAN and SAN traffic. This document covers only the standard 10GbE deployments that support IP storage traffic (iSCSI/NFS) and not FCoE.


In this section, two design options are described; one is a traditional approach and the other is a VMware-recommended approach.

[Figure: two clusters of ESXi hosts running virtual machines, attached to one vSphere Distributed Switch, with each host's two 10GbE adaptors uplinked through the access layer switches to the aggregation layer. Legend: port groups PG-A and PG-B.]

Figure 5. Rack Server with Two 10GbE Network Adaptors

Design Option 1 – Static Configuration
The static configuration approach for rack server deployment with 10GbE network adaptors is similar to the one described in "Design Option 1" of rack server deployment with eight 1GbE adaptors. There are a few differences in the configuration, where the number of dvuplinks changes from eight to two and the dvportgroup parameters are different. Let's take a look at the configuration details on the VDS front.
dvuplink Configuration
To support the maximum of two Ethernet network adaptors per host, the dvuplink port group is configured with two dvuplinks (dvuplink1, dvuplink2). On the hosts, dvuplink1 is associated with vmnic0 and dvuplink2 is associated with vmnic1.
dvportgroup Configuration
As described in Table 6, there are five different dvportgroups that are configured for the five different traffic types. For example, dvportgroup PG-A is created for the management traffic type. The following are the other key configurations of dvportgroup PG-A:
• Teaming option: An explicit failover order provides a deterministic way of directing traffic to a particular uplink. By selecting dvuplink1 as an active uplink and dvuplink2 as a standby uplink, management traffic will be carried over dvuplink1 unless there is a failure with it. Configuring the failback option to "No" is also recommended, to avoid the flapping of traffic between two network adaptors. The failback option determines how a physical adaptor is returned to active duty after recovering from a failure. If failback is set to "No," a failed adaptor is left inactive, even after recovery, until another currently active adaptor fails, requiring its replacement.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.


• There are various other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs.
Table 6 provides the configuration details for all the dvportgroups. According to the configuration, dvuplink1 carries management, iSCSI and virtual machine traffic; dvuplink2 handles vMotion, FT and virtual machine traffic. As you can see, the virtual machine traffic type makes use of two uplinks, and these uplinks are utilized through the LBT algorithm.
With this deterministic teaming policy, customers can decide to map different traffic types to the available uplink ports, depending on environment needs. For example, if iSCSI traffic needs higher bandwidth and other traffic types have relatively low bandwidth requirements, customers can decide to keep only iSCSI traffic on dvuplink1 and move all other traffic to dvuplink2. When deciding on these traffic paths, customers should understand the physical network connectivity and the paths' bandwidth capacities.
Physical Switch Configuration
The external physical switch, which the rack servers' network adaptors are connected to, has a trunk configuration with all the appropriate VLANs enabled. As described in the "Physical Network Switch Parameters" section, the following switch configurations are performed based on the VDS setup described in Table 6.
• Enable STP on the trunk ports facing the ESXi hosts, along with the PortFast mode and BPDU guard feature.
• The teaming configuration on VDS is static, and therefore no link aggregation is configured on the physical switches.
• Because of the mesh topology deployment shown in Figure 5, the link-state tracking feature is not required on the physical switches.

TRAFFIC TYPE    | PORT GROUP | TEAMING OPTION    | ACTIVE UPLINK       | STANDBY UPLINK | UNUSED UPLINK
Management      | PG-A       | Explicit Failover | dvuplink1           | dvuplink2      | None
vMotion         | PG-B       | Explicit Failover | dvuplink2           | dvuplink1      | None
FT              | PG-C       | Explicit Failover | dvuplink2           | dvuplink1      | None
iSCSI           | PG-D       | Explicit Failover | dvuplink1           | dvuplink2      | None
Virtual Machine | PG-E       | LBT               | dvuplink1/dvuplink2 | None           | None

Table 6. Static Design Configuration

This static design option provides flexibility in the traffic path configuration, but it cannot protect against one traffic type's dominating others. For example, there is a possibility that a network-intensive vMotion process might take away most of the network bandwidth and impact virtual machine traffic. Bidirectional traffic-shaping parameters at the port group and port levels can provide some help in managing different traffic rates. However, using this approach for traffic management requires customers to limit the traffic on the respective dvportgroups. Limiting traffic to a certain level through this method puts a hard limit on the traffic types, even when bandwidth is available to utilize. This underutilization of I/O resources because of hard limits is overcome through the NIOC feature, which provides flexible traffic management based on the shares parameters. "Design Option 2," described in the following section, is based on the NIOC feature.


Design Option 2 – Dynamic Configuration with NIOC and LBT $%/ƫ 5*)%ƫ !/%#*ƫ+,0%+*ƫ%/ƫ0$!ƫ 3.!ġ.!+))!* ! ƫ,,.+$ƫ0$0ƫ0'!/ƫ 2*0#!ƫ+"ƫ0$!ƫ ƫ* ƫ ƫ features of the VDS. +**!0%2%05ƫ0+ƫ0$!ƫ,$5/%(ƫ*!03+.'ƫ%*"./0.101.!ƫ.!)%*/ƫ0$!ƫ/)!ƫ/ƫ0$0ƫ !/.%! ƫ%*ƫė!/%#*ƫ,0%+*ƫāċĘƫ +3!2!.Čƫ%*/0! ƫ+"ƫ((+0%*#ƫ/,!%üƫ 21,(%*'/ƫ0+ƫ%* %2% 1(ƫ0.þƫ05,!/Čƫ0$!ƫ%ƫ,(0"+.)ƫ10%(%6!/ƫ0$+/!ƫ dvuplinks dynamically. To illustrate this dynamic design, each virtual infrastructure traffic type’s bandwidth utilization is estimated. In a real deployment, customers should first monitor the virtual infrastructure traffic over a period of time to gauge the bandwidth utilization, and then come up with bandwidth numbers. The following are some bandwidth numbers estimated by traffic type: đƫ *#!)!*0ƫ0.þƫĨŕāĩ đƫ2 +0%+*ƫĨĂĩ đƫƫĨāĩ đƫ% ƫĨĂĩ đƫ%.01(ƫ)$%*!ƫĨĂĩ $!/!ƫ* 3% 0$ƫ!/0%)0!/ƫ.!ƫ %û!.!*0ƫ".+)ƫ0$!ƫ+*!ƫ+*/% !.! ƫ3%0$ƫ.'ƫ/!.2!.ƫ !,(+5)!*0ƫ3%0$ƫ!%#$0ƫāƫ network adaptors. Let’s take a look at the VDS parameter configurations for this design. The dvuplink port group +*ü#1.0%+*ƫ.!)%*/ƫ0$!ƫ/)!Čƫ3%0$ƫ03+ƫ 21,(%*'/ƫ.!0! ƫ"+.ƫ0$!ƫ03+ƫāĀƫ*!03+.'ƫ ,0+./ċƫ$!ƫ dvportgroup configuration is as follows. dvportgroup Configuration *ƫ0$%/ƫ !/%#*Čƫ((ƫ 21,(%*'/ƫ.!ƫ0%2!ƫ* ƫ0$!.!ƫ.!ƫ*+ƫ/0* 5ƫ* ƫ1*1/! ƫ1,(%*'/Čƫ/ƫ/$+3*ƫ%*ƫ(!ƫĈċƫ((ƫ dvuplinks are therefore available for use by the teaming algorithm. The following are the key configurations of 2,+.0#.+1,ƫġč đƫ!)%*#ƫ+,0%+*čƫ ƫ%/ƫ/!(!0! ƫ/ƫ0$!ƫ0!)%*#ƫ(#+.%0$)ċƫ%0$ƫ ƫ+*ü#1.0%+*Čƫ)*#!)!*0ƫ0.þƫ%*%0%((5ƫ 3%((ƫ!ƫ/$! 1(! ƫ/! ƫ+*ƫ0$!ƫ2%.01(ƫ,+.0ƫ ƫ$/$ċƫ/! ƫ+*ƫ0$!ƫ$/$ƫ+10,10Čƫ)*#!)!*0ƫ0.þƫ3%((ƫ!ƫ/!*0ƫ out over one of the dvuplinks. Other traffic types in the virtual infrastructure can also be scheduled on the /)!ƫ 21,(%*'ƫ3%0$ƫ ƫ+*ü#1.0%+*ċƫ1/!-1!*0(5Čƫ%"ƫ0$!ƫ10%(%60%+*ƫ+"ƫ0$!ƫ1,(%*'ƫ#+!/ƫ!5+* ƫ0$!ƫĈĆƫ,!.!*0ƫ 0$.!/$+( Čƫ0$!ƫ ƫ(#+.%0$)ƫ3%((ƫ!ƫ%*2+'! ƫ* ƫ/+)!ƫ+"ƫ0$!ƫ0.þƫ3%((ƫ!ƫ)+2! ƫ0+ƫ+0$!.ƫ1* !.10%(%6! ƫ dvuplinks. 
It is possible that management traffic will get moved to other dvuplinks when such an event occurs.
• There are no standby dvuplinks in this configuration, so the failback setting is not applicable for this design approach. The default setting for this failback option is "Yes."
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each dvportgroup.
• There are several other parameters that are part of the dvportgroup configuration. Customers can choose to configure these parameters based on their environment needs.
As you follow the dvportgroups configuration in Table 7, you can see that each traffic type has all the dvuplinks as active and these uplinks are utilized through the LBT algorithm. Let's take a look at the NIOC configuration.

The NIOC configuration in this design not only helps provide the appropriate I/O resources to the different traffic types but also provides SLA guarantees by preventing one traffic type from dominating others.

Based on the bandwidth assumptions made for different traffic types, the shares parameters are configured in the NIOC shares column in Table 7. To illustrate how share values translate to bandwidth numbers in this deployment, let's take an example of a 10GbE capacity dvuplink carrying all five traffic types. This is a worst-case scenario in which all traffic types are mapped to one dvuplink. This will never happen when customers enable the LBT feature, because LBT will move the traffic type based on the uplink utilization.
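To make the threshold behavior concrete, the following toy simulation sketches how an LBT-style rebalance might move traffic off a dvuplink whose utilization exceeds 75 percent of its 10GbE capacity. All names, numbers, and the move-the-smallest-flow heuristic are illustrative assumptions, not VMware's actual algorithm:

```python
# Toy sketch of Load-Based Teaming (LBT) rebalancing, illustrative only and
# not VMware's implementation. Utilization is tracked per dvuplink; when an
# uplink exceeds 75% of its 10Gbps capacity, flows are moved to the
# least-loaded uplink until it drops back under the threshold.

CAPACITY_GBPS = 10.0
THRESHOLD = 0.75

def rebalance(uplinks):
    """uplinks: dict of uplink name -> {traffic type: Gbps}. Mutated in place."""
    moved = []
    for name, flows in uplinks.items():
        while sum(flows.values()) > THRESHOLD * CAPACITY_GBPS and len(flows) > 1:
            # Pick the least-loaded other uplink as the migration target.
            target = min((u for u in uplinks if u != name),
                         key=lambda u: sum(uplinks[u].values()))
            # Move the smallest flow to minimize disruption (a design choice
            # made for this sketch, not a documented LBT rule).
            flow = min(flows, key=flows.get)
            uplinks[target][flow] = flows.pop(flow)
            moved.append((flow, name, target))
    return moved

uplinks = {
    "dvuplink1": {"vMotion": 6.0, "iSCSI": 2.0, "management": 0.5},
    "dvuplink2": {"virtual machine": 1.0},
}
print(rebalance(uplinks))  # smallest flows leave the saturated dvuplink1
print(uplinks)
```

In this sketch, the saturated dvuplink1 (8.5Gbps) sheds flows until it is below the 7.5Gbps threshold; the real LBT algorithm evaluates utilization over a time window rather than instantaneously.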



The following example shows how much bandwidth each traffic type will be allowed on one dvuplink during a contention or oversubscription scenario and when LBT is not enabled:
• Total shares: management (5) + vMotion (20) + FT (10) + iSCSI (20) + virtual machine (20) = 75
• Management: 5 shares; (5/75) × 10 = 667Mbps
• vMotion: 20 shares; (20/75) × 10 = 2.67Gbps
• FT: 10 shares; (10/75) × 10 = 1.33Gbps
• iSCSI: 20 shares; (20/75) × 10 = 2.67Gbps
• Virtual machine: 20 shares; (20/75) × 10 = 2.67Gbps
For each traffic type, first the percentage of bandwidth is calculated by dividing the share value by the total available share number (75), and then the total bandwidth of the dvuplink (10GbE) is used to calculate the bandwidth share for the traffic type. For example, 20 shares allocated to vMotion traffic translate to 2.67Gbps of bandwidth for the vMotion process on a fully utilized 10GbE network adaptor.

In this 10GbE deployment, customers can provide bigger pipes to individual traffic types without the use of trunking or multipathing technologies. This was not the case with an eight 1GbE deployment.

There is no change in physical switch configuration in this design approach, so refer to the physical switch settings described in "Design Option 1" in the previous section.
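The share arithmetic above can be checked with a few lines of script. The share values and the 10GbE link capacity come from Table 7; the formula is simply shares divided by total shares, times link capacity:

```python
# Worst-case bandwidth per traffic type when all five types contend on a
# single 10GbE dvuplink: bandwidth = (shares / total_shares) * capacity.

UPLINK_GBPS = 10.0

shares = {"management": 5, "vMotion": 20, "FT": 10, "iSCSI": 20,
          "virtual machine": 20}

total_shares = sum(shares.values())  # 75
bandwidth = {name: s / total_shares * UPLINK_GBPS for name, s in shares.items()}

for name, gbps in bandwidth.items():
    print(f"{name}: {gbps:.2f} Gbps")
# management gets 0.67 Gbps (667Mbps); vMotion, iSCSI and virtual machine
# each get 2.67 Gbps; FT gets 1.33 Gbps.
```

These figures are guaranteed minimums under contention; with LBT enabled, the worst case rarely materializes because flows are spread across both dvuplinks.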

TRAFFIC TYPE      PORT GROUP   TEAMING OPTION   ACTIVE UPLINK   STANDBY UPLINK   NIOC SHARES   NIOC LIMITS
Management        PG-A         LBT              dvuplink1, 2    None             5             –
vMotion           PG-B         LBT              dvuplink1, 2    None             20            –
FT                PG-C         LBT              dvuplink1, 2    None             10            –
iSCSI             PG-D         LBT              dvuplink1, 2    None             20            –
Virtual machine   PG-E         LBT              dvuplink1, 2    None             20            –

Table 7. Dynamic Design Configuration

This design option utilizes the advanced VDS features and provides customers with a dynamic and flexible design approach. In this design, I/O resources are utilized effectively and SLAs are met based on the shares allocation.



Blade Server in Example Deployment

Blade servers are server platforms that provide higher server consolidation per rack unit as well as lower power and cooling costs. Blade chassis that host the blade servers have proprietary architectures, and each vendor has its own way of managing resources in the blade chassis. It is difficult in this document to look at all of the various blade chassis available on the market and to discuss their deployments. In this section, we will focus on some generic parameters that customers should consider when deploying VDS in a blade chassis environment. From a networking point of view, all blade chassis provide the following two options:
• Integrated switches: With this option, the blade chassis enables built-in switches to control traffic flow between the blade servers within the chassis and the external network.
• Pass-through technology: This is an alternative method of network connectivity that enables the individual blade servers to communicate directly with the external network.
In this document, the integrated switch option is described as "where the blade chassis has a built-in Ethernet switch." This Ethernet switch acts as an access layer switch, as shown in Figure 6.

This section discusses a deployment in which the ESXi host is running on a blade server. The following two types of blade server configuration will be described in the next section:
• Blade server with two 10GbE network adaptors
• Blade server with hardware-assisted multiple logical network adaptors
For each of these two configurations, various VDS design approaches will be discussed.

Blade Server with Two 10GbE Network Adaptors

This deployment is quite similar to that of a rack server with two 10GbE network adaptors, in which each ESXi host is provided with two 10GbE network adaptors. As shown in Figure 6, an ESXi host running on a blade server in the blade chassis is also provided with two 10GbE network adaptors.


Figure 6. Blade Server with Two 10GbE Network Adaptors



In this section, two design options are described. One is a traditional static approach and the other one is a VMware-recommended dynamic configuration with NIOC and LBT features enabled. These two approaches are exactly the same as the deployment described in the "Rack Server with Two 10GbE Network Adaptors" section. Only blade chassis–specific design decisions will be discussed as part of this section. For all other VDS and switch-related configurations, refer to the "Rack Server with Two 10GbE Network Adaptors" section of this document.

Design Option 1 – Static Configuration

The configuration of this design approach is exactly the same as that described in the "Design Option 1" section under "Rack Server with Two 10GbE Network Adaptors." Refer to Table 6 for dvportgroup configuration details. Let's take a look at the blade server–specific parameters that require attention during the design.

Network and hardware reliability considerations should be incorporated during the blade server design as well. In these blade server designs, customers must focus on the following two areas:
• High availability of blade switches in the blade chassis
• Connectivity of blade server network adaptors to internal blade switches
High availability of blade switches can be achieved by having two Ethernet switching modules in the blade chassis. And the connectivity of the two network adaptors on the blade server should be such that one network adaptor is connected to the first Ethernet switch module, and the other network adaptor is hooked to the second switch module in the blade chassis.

Another aspect that requires attention in the blade server deployment is the network bandwidth availability across the midplane of the blade chassis and between the blade switches and the aggregation layer. If there is an oversubscription scenario in the deployment, customers must think about utilizing the traffic shaping and prioritization (802.1p tagging) features available in the vSphere platform. The prioritization feature enables customers to tag the important traffic coming out of the vSphere platform. These high-priority–tagged packets are then treated according to priority by the external switch infrastructure.
During congestion scenarios, the switch will drop lower-priority packets first and avoid dropping the important, high-priority packets.

This static design option provides customers with the flexibility to choose different network adaptors for different traffic types. However, when allocating traffic across only two 10GbE network adaptors, administrators ultimately will schedule multiple traffic types on a single adaptor. As multiple traffic types flow through one adaptor, the chances of one traffic type's dominating others increase. To avoid the performance impact of "noisy neighbors" (a dominating traffic type), customers must utilize the traffic management tools provided in the vSphere platform. One of those traffic management features is NIOC, and that feature is utilized in "Design Option 2," which is described in the following section.

Design Option 2 – Dynamic Configuration with NIOC and LBT

This dynamic configuration approach is exactly the same as that described in the "Design Option 2" section under "Rack Server with Two 10GbE Network Adaptors." Refer to Table 7 for the dvportgroup configuration details and NIOC settings. The physical switch–related configuration in the blade chassis deployment is the same as that described in the rack server deployment. For the blade center–specific recommendations on reliability and traffic management, refer to the previous section.

VMware recommends this design option, which utilizes the advanced VDS features and provides customers with a dynamic and flexible design approach. With this design, I/O resources are utilized effectively and SLAs are met based on the shares allocation.



Blade Server with Hardware-Assisted Logical Network Adaptors (HP Flex-10– or Cisco UCS–like Deployment)

Some of the new blade chassis support traffic management capabilities that enable customers to carve I/O resources. This is achieved by providing logical network adaptors for the ESXi hosts. Instead of two 10GbE network adaptors, the ESXi host now sees multiple physical network adaptors that operate at different configurable speeds. As shown in Figure 7, each ESXi host is provided with eight Ethernet network adaptors that are carved out of two 10GbE network adaptors.

Figure 7. Multiple Logical Network Adaptors

This deployment is quite similar to that of the rack server with eight 1GbE network adaptors. However, instead of 1GbE network adaptors, the capacity of each network adaptor is configured at the blade chassis level. In the blade chassis, customers can carve out different capacity network adaptors based on the need of each traffic type. For example, if iSCSI traffic needs 2.5GbE of bandwidth, a logical network adaptor with that amount of I/O resources can be created on the blade chassis and provided for the blade server.

As for the configuration of the VDS and blade chassis switch infrastructure, the configuration described in "Design Option 1" under "Rack Server with Eight 1GbE Network Adaptors" is more relevant for this deployment. The static configuration option described in that design can be applied as is in this blade server environment. Refer to Table 2 for the dvportgroup configuration details and to the switch configurations described in that section for physical switch configuration details.

The question now is whether NIOC capability adds any value in this specific blade server deployment. NIOC is a traffic management feature that helps in scenarios where multiple traffic types flow through one uplink or network adaptor. If in this particular deployment only one traffic type is assigned to a specific Ethernet network adaptor, the NIOC feature will not add any value. However, if multiple traffic types are scheduled over one network adaptor, customers can make use of NIOC to assign appropriate shares to different traffic types. This NIOC configuration will ensure that bandwidth resources are allocated to traffic types and that SLAs are met.



As an example, let's consider a scenario in which vMotion and iSCSI traffic is carried over one 3GbE logical uplink. To protect the iSCSI traffic from network-intensive vMotion traffic, administrators can configure NIOC and allocate shares to each traffic type. If the two traffic types are equally important, administrators can configure shares with equal values (10 each). With this configuration, when there is a contention scenario, NIOC will make sure that the iSCSI process gets half of the 3GbE uplink bandwidth, avoiding any impact on the vMotion process.

VMware recommends that the network and server administrators work closely together when deploying the traffic management features of the VDS and blade chassis. To achieve the best end-to-end quality of service (QoS) result, a considerable amount of coordination is required during the configuration of the traffic management features.
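The split in the scenario above can be modeled in a few lines. The code below is a simplified sketch of the NIOC shares semantics: shares act as guaranteed minimums that apply only under contention, and bandwidth left idle by one traffic type may be consumed by another. The redistribution loop is an assumption made for illustration, not VMware's scheduler:

```python
# Simplified model of NIOC shares on one 3GbE logical uplink. Each traffic
# type is guaranteed shares/total_shares of the link under contention, but
# unused bandwidth can be consumed by whoever still demands it. This is a
# conceptual sketch, not VMware's actual scheduling algorithm.

LINK_GBPS = 3.0

def allocate(demand, shares):
    """demand and shares: dicts keyed by traffic type. Returns Gbps per type."""
    total = sum(shares.values())
    guaranteed = {k: shares[k] / total * LINK_GBPS for k in shares}
    # Everyone first receives at most their guaranteed minimum.
    alloc = {k: min(demand[k], guaranteed[k]) for k in demand}
    # Hand leftover capacity to traffic types still demanding more.
    leftover = LINK_GBPS - sum(alloc.values())
    for k in sorted(demand, key=lambda k: demand[k] - alloc[k], reverse=True):
        extra = min(demand[k] - alloc[k], leftover)
        alloc[k] += extra
        leftover -= extra
    return alloc

# Contention: both vMotion and iSCSI want the whole 3GbE link.
print(allocate({"vMotion": 3.0, "iSCSI": 3.0}, {"vMotion": 10, "iSCSI": 10}))
# -> each gets its 1.5Gbps guarantee

# No contention: vMotion is idle, so iSCSI may exceed its 1.5Gbps guarantee.
print(allocate({"vMotion": 0.0, "iSCSI": 3.0}, {"vMotion": 10, "iSCSI": 10}))
# -> iSCSI uses the full 3.0Gbps
```

Equal shares therefore guarantee each traffic type half of the uplink only when both are active; when one is idle, the other is not artificially capped, which is exactly the advantage NIOC shares have over hard limits.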

Operational Best Practices

After a customer successfully designs the virtual network infrastructure, the next challenges are how to deploy the design and how to keep the network operational. VMware provides various tools, APIs, and procedures to help customers effectively deploy and manage their network infrastructure. The following are some key tools available in the vSphere platform:
• VMware vSphere® Command-Line Interface (vSphere CLI)
• VMware vSphere® API
• Virtual network monitoring and troubleshooting
– NetFlow
– Port mirroring
In the following section, we will briefly discuss how vSphere and network administrators can utilize these tools to manage their virtual network. Refer to the vSphere documentation for more details on the tools.

VMware vSphere Command-Line Interface

vSphere administrators have several ways to access vSphere components through vSphere interface options, including VMware vSphere® Client™, vSphere Web Client, and vSphere Command-Line Interface. The vSphere CLI command set enables administrators to perform configuration tasks by using a vSphere vCLI package installed on supported platforms or by using VMware vSphere® Management Assistant (vMA). Refer to the Getting Started with vSphere CLI document for more details on the commands: http://www.vmware.com/support/developer/vcli. The entire networking configuration can be performed through vSphere vCLI, helping administrators automate the deployment process.

VMware vSphere API

The networking setup in the virtualized datacenter involves configuration of virtual and physical switches. VMware has provided APIs that enable network switch vendors to get information about the virtual infrastructure, which helps them to automate the configuration of the physical switches and the overall process. For example, vCenter can trigger an event after the vMotion process of a virtual machine is performed. After receiving this event trigger and related information, the network vendors can reconfigure the physical switch port policies such that when the virtual machine moves to another host, the VLAN/access control list (ACL) configurations are migrated along with the virtual machine. Multiple networking vendors have provided this automation between physical and virtual infrastructure configurations through integration with vSphere APIs. Customers should check with their networking vendors to learn whether such an automation tool exists that will bridge the gap between physical and virtual networking and simplify the operational challenges.



Virtual Network Monitoring and Troubleshooting

Monitoring and troubleshooting network traffic in a virtual environment require tools similar to those available in the physical switch environment. With the release of vSphere 5, VMware gives network administrators the ability to monitor and troubleshoot the virtual infrastructure through features such as NetFlow and port mirroring. NetFlow capability on a distributed switch, along with a NetFlow collector tool, helps monitor application flows and measure flow performance over time. It also helps in capacity planning and in ensuring that I/O resources are utilized properly by different applications, based on their needs.

Port mirroring capability on a distributed switch is a valuable tool that helps network administrators debug network issues in a virtual infrastructure. Granular control over monitoring ingress, egress or all traffic of a port helps administrators fine-tune what traffic is sent for analysis.

vCenter Server on a Virtual Machine

As mentioned earlier, vCenter Server is only used to provision and manage VDS configurations. Customers can choose to deploy it on a virtual machine or a physical host, depending on their management resource design requirements. In case of vCenter Server failure scenarios, the VDS will continue to provide network connectivity, but no VDS configuration changes can be performed.

By deploying vCenter Server on a virtual machine, customers can take advantage of vSphere platform features such as vSphere High Availability (HA) and VMware Fault Tolerance (Fault Tolerance) to provide higher resiliency to the management plane. In such deployments, customers must pay more attention to the network configurations, because if the networking for the virtual machine hosting vCenter Server is misconfigured, the network connectivity of vCenter Server is lost. This misconfiguration must be fixed; however, customers need vCenter Server to fix the network configuration, because only vCenter Server can configure a VDS. As a work-around to this situation, customers must use vSphere Client to connect directly to the host where the vCenter Server virtual machine is running. Then they must reconnect the virtual machine hosting vCenter Server to a VSS that is also connected to the management network of hosts. After the virtual machine running vCenter Server is reconnected to the network, it can manage and configure VDS.

Refer to the community article "Virtual Machine Hosting a vCenter Server Best Practices" for guidance regarding the deployment of vCenter on a virtual machine:
http://communities.vmware.com/servlet/JiveServlet/previewBody/14089-102-1-16292/VMHostBestPractices.html

Conclusion

VMware vSphere Distributed Switch provides customers with the right measure of features, capabilities and operational simplicity for deploying a virtual network infrastructure. As customers move on to build private or public clouds, VDS provides the scalability numbers for such deployments. Advanced capabilities such as NIOC and LBT are key for achieving better utilization of I/O resources and for providing better SLAs for virtualized business-critical applications and multitenant deployments. Support for standard networking visibility and monitoring features such as port mirroring and NetFlow helps administrators manage and troubleshoot a virtual infrastructure through familiar tools. VDS also is an extensible platform that enables integration with other networking vendor products through open vSphere APIs.


VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com Copyright © 2012 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-vSPHR-DIST-SWTCH-PRCTICES-USLET-101