To allow specific tenant networks to span multiple virtual subnets (and thus multiple IP subnets), VSIDs can also be grouped into a single customer network that is uniquely identified by a Routing Domain ID (RDID). In that case, Network Virtualization enforces isolation at the level of the defined customer networks. This is another difference between Network Virtualization virtual networks and traditional VLANs: a VLAN can be linked to only a single IP subnet.
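In policy terms (defined with the PowerShell cmdlets covered later in this article), this grouping comes down to giving the customer routes of the different virtual subnets the same RDID. The following sketch uses made-up values for the RDID, VSIDs, and IP prefixes:

# Example only: group two virtual subnets into one customer network by
# assigning their customer routes the same Routing Domain ID (RDID)
$rdid = "{11111111-2222-3333-4444-555555555555}"

New-NetVirtualizationCustomerRoute -RoutingDomainID $rdid -VirtualSubnetID 5001 `
    -DestinationPrefix "10.0.1.0/24" -NextHop "0.0.0.0" -Metric 255

New-NetVirtualizationCustomerRoute -RoutingDomainID $rdid -VirtualSubnetID 5002 `
    -DestinationPrefix "10.0.2.0/24" -NextHop "0.0.0.0" -Metric 255

Because both routes carry the same RDID, the two virtual subnets belong to one isolated customer network.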
Network Virtualization requires only a Server 2012 Hyper-V host. With Network Virtualization, the guest OS in the VM is completely unaware that its IP address is being virtualized. From the VM's perspective, all communication occurs using its CA. This also means that a VM that is part of a Network Virtualization–based network can run any OS: not only Windows 8 and Server 2012, but also older Windows versions and other OSs.
Figure 1 illustrates how Network Virtualization works under the hood. Network Virtualization is implemented as a new network filter driver (called ms_netwnv) that can be bound to the physical network adapter cards of each Server 2012 physical server and virtual server. The new Hyper-V virtual switch, which I'll come back to later in this article, calls on this network driver to encapsulate and de-encapsulate Network Virtualization network packets.
Figure 1: Implementing Network Virtualization as a Network Filter Driver
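On a Server 2012 host, you can check and enable this binding on a physical network adapter with the standard NetAdapter cmdlets; the adapter name "Ethernet" below is only an example:

# Check on which adapters the Network Virtualization filter driver (ms_netwnv) is bound
Get-NetAdapterBinding -ComponentID "ms_netwnv"

# Bind the filter driver to the physical adapter that carries tenant traffic
# ("Ethernet" is an example adapter name; adjust it for your host)
Enable-NetAdapterBinding -Name "Ethernet" -ComponentID "ms_netwnv"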
Transporting and Routing Network Virtualization
To transport and route IP packets with virtualized CAs across the physical network, Network Virtualization can use two mechanisms. The first mechanism is based on the Generic Routing Encapsulation (GRE) tunneling protocol that's defined in Request for Comments (RFCs) 2784 and 2890. In this context, GRE is used to encapsulate the network packets that are generated by a VM (with a CA) into packets that are generated by the host (with a PA). Together with other cloud industry players (e.g., HP, Intel, Emulex, Dell), Microsoft has submitted a draft to the Internet Engineering Task Force (IETF) to make the Network Virtualization variation of GRE (called NVGRE) a standard.
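In terms of the Network Virtualization policies (defined with PowerShell, as discussed later in this article), an NVGRE mapping boils down to a CA-to-PA lookup record with an encapsulation rule. The addresses, VSID, and MAC address in this sketch are made up:

# Example only: map a VM's customer address (CA) to the host's provider
# address (PA), using NVGRE encapsulation
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.1.5" `
    -ProviderAddress "192.168.10.20" -VirtualSubnetID 5001 `
    -MACAddress "00155D010105" -Rule "TranslationMethodEncap"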
The second mechanism, IP address rewrite, can be compared with Network Address Translation (NAT). This mechanism rewrites packets with virtualized CAs to packets with PAs, which can be routed across the physical network.
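The same lookup-record cmdlet is used for IP address rewrite; only the translation rule changes, and the PA must be dedicated to that one CA (again, all values are examples):

# Example only: CA-to-PA mapping using IP address rewrite (NAT-style translation);
# this PA must be unique to this one customer address
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.1.5" `
    -ProviderAddress "192.168.10.21" -VirtualSubnetID 5001 `
    -MACAddress "00155D010105" -Rule "TranslationMethodNat"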
At the time of writing, IP address rewrite is better suited for VMs with high throughput requirements (e.g., 10Gbps) because it can leverage hardware-level offload mechanisms on the network adapter card, such as large send offload (LSO) and virtual machine queue (VMQ). A big disadvantage of IP address rewrite is that it requires one unique PA for each VM CA. Otherwise, it would not be possible to differentiate and route the network packets from and to VMs that belong to different tenants with overlapping IP addresses.
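You can quickly verify whether a physical adapter exposes these offloads with the NetAdapter cmdlets; "Ethernet" is again just an example adapter name:

# Check large send offload (LSO) and virtual machine queue (VMQ) support
Get-NetAdapterLso -Name "Ethernet"
Get-NetAdapterVmq -Name "Ethernet"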
Because GRE requires only one PA per host, Microsoft recommends using NVGRE over IP address rewrite for Network Virtualization. NVGRE can be implemented without making changes to the physical network switch architecture: NVGRE tunnels are terminated on the Hyper-V hosts, which handle all the encapsulation and de-encapsulation of GRE network traffic.
One disadvantage of GRE is that it can't leverage the network adapter card hardware-level offload mechanisms. Therefore, if you plan to use NVGRE to virtualize high network throughputs, I advise you to wait for the availability of network adapter cards that support NVGRE offloading. Microsoft expects vendors to release such cards for Server 2012 later this year. When using GRE, also watch out for existing firewalls that might have default GRE blocking rules. Always make sure that such firewalls are reconfigured to allow GRE (IP Protocol 47) tunnel traffic.
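If a Windows-based firewall sits in the path between your Hyper-V hosts, a rule along these lines would allow the tunnel traffic through (the rule name is made up; for hardware firewalls, check the vendor's documentation for the equivalent setting):

# Example only: allow inbound GRE (IP protocol 47) traffic
New-NetFirewallRule -DisplayName "Allow NVGRE tunnel traffic" `
    -Direction Inbound -Protocol 47 -Action Allow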
Implementing Network Virtualization
The process for implementing and configuring Hyper-V 3.0 Network Virtualization is different from that for setting up VLANs in Hyper-V. You can configure VLANs from Hyper-V Manager in the VM network adapter settings. Network Virtualization configuration isn't part of a VM's configuration and can't be done from Hyper-V Manager. This is because Network Virtualization is based on specific policies that are enforced at the virtual-switch level of a Hyper-V host.
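For comparison, the per-VM VLAN assignment is a single VM network adapter setting, shown here with its PowerShell equivalent (the VM name and VLAN ID are examples); the Network Virtualization policies discussed next live on the host instead:

# Example only: classic VLAN isolation is set per VM network adapter
Set-VMNetworkAdapterVlan -VMName "TenantA-VM1" -Access -VlanId 100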
To define Network Virtualization policies locally on the host, you must use Windows PowerShell scripts. To define the policies centrally, you can use the Microsoft System Center Virtual Machine Manager (VMM) Service Pack 1 (SP1) GUI. VMM is Microsoft's unified management solution for VMs. In larger Network Virtualization environments, I strongly recommend that you leverage VMM. VMM can run the correct PowerShell cmdlets on your behalf and enforce the Network Virtualization policies on the Hyper-V host through local System Center host agents. An important limitation at the time of this writing is that the VMM GUI can be used only to define IP address rewrite policies. To define NVGRE policies, you must use PowerShell scripts. To get you started configuring Network Virtualization by using PowerShell, Microsoft provides a sample PowerShell script in its Simple Hyper-V Network Virtualization Demo.
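To give you a feel for what such a script involves, here is a heavily trimmed sketch of the main policy steps on a single host. All names, IDs, and addresses are invented; Microsoft's sample script remains the authoritative starting point:

# 1. Register the provider address (PA) this host uses on the physical network
#    (interface index 12 is an example; use the index of your physical adapter)
New-NetVirtualizationProviderAddress -ProviderAddress "192.168.10.20" `
    -InterfaceIndex 12 -PrefixLength 24

# 2. Define the customer route for the virtual subnet
New-NetVirtualizationCustomerRoute -RoutingDomainID "{11111111-2222-3333-4444-555555555555}" `
    -VirtualSubnetID 5001 -DestinationPrefix "10.0.1.0/24" -NextHop "0.0.0.0" -Metric 255

# 3. Add the CA-to-PA lookup record for the tenant VM (NVGRE encapsulation)
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.1.5" `
    -ProviderAddress "192.168.10.20" -VirtualSubnetID 5001 `
    -MACAddress "00155D010105" -Rule "TranslationMethodEncap"

# 4. Tie the VM's network adapter to the virtual subnet
Get-VMNetworkAdapter -VMName "TenantA-VM1" | Set-VMNetworkAdapter -VirtualSubnetID 5001

Keep in mind that matching policies must be defined on every Hyper-V host that runs VMs of the same customer network.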
When you want to use NVGRE, you should also plan for Network Virtualization gateway functionality. A Network Virtualization gateway is needed to enable a VM on a virtual network to communicate outside of that virtual network. The gateway knows the Network Virtualization address-mapping policies and can translate network packets that are encapsulated with NVGRE to non-encapsulated packets, and vice versa.
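In policy terms, such a gateway typically appears as the next hop of a default customer route for the routing domain. The following sketch assumes a gateway reachable at the made-up customer address 10.0.1.1:

# Example only: send all traffic that leaves the customer network to a
# Network Virtualization gateway at customer address 10.0.1.1
New-NetVirtualizationCustomerRoute -RoutingDomainID "{11111111-2222-3333-4444-555555555555}" `
    -VirtualSubnetID 5001 -DestinationPrefix "0.0.0.0/0" -NextHop "10.0.1.1" -Metric 255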